Chicken CAM!

Preamble

Cluck!!

It's amazing what you can do when you have too much time on your hands! I had wanted, for a long time, to have a webcam monitoring my chickens, more for fun than for any other reason. It has, however, given me greater insight into their lives!

Chicken feed

Hardware

The major problem with the installation is that it is quite a long way to the chicken run.

I had a choice of the following:

Run mains power – This would be a really big job requiring full isolation and burying the cable at the requisite depth, so this choice was out.

Run a battery/solar webcam – I would have loved this route as it initially seems the simplest. On investigation, all webcams of this type that I could find did not support RTSP; they use a proprietary protocol which would not lend itself easily to this development.

Passive POE – The hardware consists of an injector (to put power in) and a splitter (to take power out). With this setup you plug the webcam power adaptor into the injector, and the 12 V is carried along 4 of the 8 wires in the cable to the splitter, where the power is separated out again.

Active POE – This is similar, but uses a custom power supply to inject the power, in my case a 48 V supply. The higher transmitted voltage allows a longer cable run: Power = Voltage x Current, so for the same power a higher voltage means a lower current, and the loss in the cable (Current² x Resistance) is correspondingly smaller. Going from 12 V to 48 V cuts the current by a factor of four, and the cable losses by a factor of sixteen.

As I have a long run to the chicken run, obviously Active POE was my best choice and I chose to use the TP-Link injector and splitter (TP-LINK TL-POE200).

Running an external wire also poses two risks: a lightning risk and a hacking risk.

Hacking risk – Someone could connect directly to my network. Who would be that mad? Russians or local boy scouts? I would be very interested to find out! The hacking risk is relatively trivial as I don't let any of the data escape my IoT VLAN inside the house.

Lightning risk – I deemed this a little more serious, so I purchased a surge protector.

This device 'should' earth any lightning and save my house and the house-side hardware from possible fire. I didn't purchase one for the chicken side as they are quite expensive, so if it gets hit, that side will be toast! I'd lose at most an old webcam and a POE splitter.

I put it all together in an ABS plastic project box that has a silicone O-ring running around the joint. Also, after gluing on the camera, I ran some silicone bathroom sealant around the camera to try to keep out any remaining water. Hopefully it will be watertight for a few years.

The plan for deployment was:

switch -> cable -> POE injector -> cable -> isolator (under house) -> cable -> POE splitter -> cable -> camera

In order to make minimal holes in my house I only drilled a 1/4 inch hole by the switch to feed the wire under the house, and fed the chicken run bound cable in through an air brick. This left me with the task of crimping on some RJ45 connectors under the house (not the best place to be doing this!)

Never having crimped an RJ45 connector before, I discovered this is not as easy as it looks. In future I will probably buy a ready-made cable if I can rather than fiddle with making them up. To anyone wanting to make some: get some good practice in first.

I also discovered that there exist pass-through connectors, which allow the wires to come completely through the connector, so it is a lot easier to check you have located the wires correctly and in the right order. However, be warned: you MUST trim the extra lengths completely. I installed everything and then my switch started reporting 'crosstalk', which after some pondering I gathered was a partial short at the end of the RJ45. After cleaning this up, all worked well.

Software

The software stack is

Camera -> FFmpeg -> Webpage

Camera – I used "IP Camera Viewer 2" on my Mac to check out the camera and work out the RTSP connection string, and later tried "Surveillance Station" on my NAS to check it all worked.

FFmpeg – This is the glue I used to generate MPEG-DASH files that can be picked up in a web browser. I followed Kevin Godell's guide to its use, which can be found at https://gist.github.com/kevinGodell/f91ae4087b0a56c138842dbb40cbfe7c

The standard FFmpeg that comes with Synology is quite old and restricted, so I used the one published by the Synology community at http://packages.synocommunity.com

Initially when I tried this, the feed always stopped fractionally short of 30 seconds. I needed to add the option "-rtsp_transport tcp". For some reason I think the UDP flow was getting broken, and in my small setup I don't think running over TCP makes much difference.

I currently run FFmpeg in a script at boot up.
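The heart of that script is a single FFmpeg invocation along these lines; a minimal sketch, where the RTSP URL, credentials and output path are placeholders and the DASH options follow the general shape of Kevin Godell's guide rather than my exact settings:

[js]
#!/bin/sh
# Minimal sketch: pull the camera's RTSP feed over TCP and repackage it as
# MPEG-DASH for the web server. URL, credentials and paths are placeholders.
ffmpeg -rtsp_transport tcp \
  -i "rtsp://user:pass@192.168.1.200:554/stream1" \
  -an -c:v copy \
  -f dash -window_size 4 -extra_window_size 0 -remove_at_exit 1 \
  /volume1/web/chickencam/live.mpd
[/js]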

As this project is only for research and not for profit, I expect very few hits. However, I did decide to limit the outbound bandwidth just to be sure, in case, say, China decides it's a good idea to watch my chickens all through lockdown!
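For what it's worth, on a plain Linux box a crude outbound cap can be set with tc; the interface name and rate below are purely illustrative and not necessarily how my own limit is applied:

[js]
# Illustrative only: cap outbound traffic on eth0 to 2 Mbit/s with a token bucket filter
tc qdisc add dev eth0 root tbf rate 2mbit burst 32kbit latency 400ms
[/js]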

Conclusion

Great fun putting it together, however making RJ45 connectors was very fiddly.

Dave, I’m afraid I can’t let you do that…..

My website has been very inactive for quite a long time… just chilling, enjoying its life… when this week it got blacklisted by Google!!
Anyone trying to visit the site using Chrome got hit with a VERY large RED screen! Quite frankly, even I felt shy about entering my own site. Had I been hacked? Was the web as bad as they say? Had China/Russia/CIA/FBI infiltrated my site to sell Bitcoins or dispense spam?
No! I believe my certificate had expired or was misconfigured, and a web crawler saw a lot of javascript/code stuff and said: hey, this site is dangerous.
Why am I complaining when my site is now back up and running? Firstly, I upgraded the backend of the site and checked for hacking. I couldn't find any evidence of hacking, and I redid the certificates. I believe the site had been flagged for 'social engineering'.
Why am I complaining? Chrome has a 68% market share of the web, and Safari uses Google's service to validate websites, so effectively Chrome/Google CAN shut down the WEB!
I was never scared of how the web was developing until now. Effectively anyone can put up a website, but it can be directly shut down. The web is morphing into a conduit for Netflix and Facebook (other brands are available and appear daily!) and crowding out the little man, who has little power.
Anyone who doesn’t understand technology is walking blindly into an abyss where the greatest invention since the printing press is held captive for money and political power.
It looks like open source software is probably the only way forward, but the web is still vulnerable to denial of service by CDNs (e.g. being kicked off Cloudflare), to being routed with insufficient bandwidth by internet providers, or, god forbid, to being restricted by governments.
I am not a political animal and I leave watching the watchers to better websites such as bellingcat.com, but these can be shut down in an instant. Even the dark web requires a conduit to work.
I refer you to the Clash song "Know Your Rights".

VPN: Synology with Mac and S9

Preamble
I eventually got around to sorting out my IPSec/L2TP VPN, but found it was a pain!
Firstly, I got both the Mac and the S9 connecting but was unable to see the actual network; both required slightly different fixes.

General
When you connect to your network, the IP handed out is on a different LAN. Say your NAS is normally 192.168.1.50; when you connect you will receive, say, 10.7.0.1 from the VPN server on the NAS. The NAS will now also have a local IP address on the 10.7.0.0 network, hence you can still have access to the NAS.
My problem was that I could not see the rest of the network.

For the Mac
I struggled, thinking it was a server-side problem, until I did a traceroute to news.bbc.co.uk and noticed the route didn't involve my NAS. There is a little checkbox in the VPN connection settings on a Mac, under the Advanced tab, under Options, that says "Send all traffic over VPN connection". Ticking this box allowed me to view my whole network, and also got BBC News routed that way too!
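A couple of quick sanity checks from the Mac's terminal (interface names will vary) confirm where traffic is actually going:

[js]
# The default route should now point at the VPN interface (e.g. ppp0/utunX)
netstat -rn | grep default
# And the first hops to an external site should go via the home network
traceroute news.bbc.co.uk
[/js]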

For the S9
I had a similar problem on the S9, but under the VPN settings, Advanced, I found I could specify my DNS, and after that I had access to my local network again.

Conclusion
VPN stuff should be trivial, but it appears the solutions are buried in the details.

SIP/VOIP at home

Preamble
I wished to separate the phone line from the internet supplier in order to make cheaper international phone calls. I have achieved this by implementing a SIP/PSTN combo solution, routing some normal PSTN traffic over a SIP connection instead.

Hardware

To do this I purchased two Grandstream Analog Telephone Adaptor (ATA) devices: the HT802 and the HT813.

HT802 – This device allows 2 phones (2 FXS ports) to be connected via VOIP.
HT813 – This device allows 1 phone (1 FXS port) to be connected via VOIP, plus 1 line out to the phone line (via an FXO port).

FXS ports will connect to a phone.
FXO ports will connect to a phone wall socket going to the exchange (PSTN).

I combined these with Asterisk to provide a VOIP solution. I run Asterisk in a Docker container on my Synology NAS.

Mostly I followed this article:

https://theworklife.com/graham-miln/2017/12/06/voip-asterisk-synology-via-docker/

But I ended up using a different supplier of the Docker image, as the original one was not readily available.

Installing a docker image is relatively simple, and once it’s running you can connect to it easily and treat it as an independent computer.

In the Asterisk package I needed to configure extensions.conf and sip.conf to match my needs, but other than that it seems to run out of the box.
I did force the SIP connection to use either ulaw or alaw encoding. Reading around the subject, these are the primary codecs used in Europe and America, and both should allow the best quality I can get for my phone line.
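The codec restriction itself is only a few lines; a minimal sketch, assuming a stock Asterisk layout inside the container (the file path, and whether the lines live under [general] or a specific peer, will depend on your setup):

[js]
# Assumed path for a stock Asterisk install; these lines belong under [general]
# or the relevant peer section of sip.conf
cat >> /etc/asterisk/sip.conf <<'EOF'
disallow=all
allow=ulaw
allow=alaw
EOF
[/js]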

House wiring

The wiring of the house, however, wasn't as trivial.

Grandstream ports are all RJ11, and I needed to map the BT wiring to these. Apparently RJ11 connections are generally set up so that the middle two pins carry the signal, rather than pins 2 and 5 as in the BT system. So after a bit of googling I set this up and it connects well. (As one article remarked, all we need is another standard!)

The second bit of wiring consisted of matching the old BT 4-core (blue, green, orange and brown) to the two centre wires of an RJ11. It appears that in my case the blue and the orange carry the signal.

The pieces of kit, together with the house wiring, have given me three separate phone zones that I can control independently.

Infrastructure

I have 2 switches I needed to configure.

T2600G-28TS
This is a layer 2+ switch which can be configured with a voice VLAN. I did, however, have to append the MAC address prefix of Grandstream products to the OUI settings, so that the switch knows it's dealing with voice packets.

TL-SG1024DE
This is a simpler managed switch, and I set its QoS to 802.1P Based.
In order to make this work, I set SIP 802.1p and RTP 802.1p on the Grandstream devices to the value 5.
The setting (5) should mean the traffic is marked as voice and gets routed appropriately quickly through the switch. I did, however, have to set the 802.1Q/VLAN tag on the Grandstream devices in order to make the 802.1p marking take effect.
I didn't use DSCP Based, as I thought it would give me less flexibility in the future, although for a 'normal' installation I suspect it might be simpler.

Testing
I have tried numerous phone calls to different zones in the house. It was strange, though: when I really had to use the system to call the PSTN line in anger, I did feel a bit anxious… Thankfully I think the line was actually clearer than the previous direct PSTN connection, mainly due to fewer telephones hanging off a single PSTN line.
I have tried phoning some numbers abroad, but the SIP connection is really still in beta test at present. I noticed no perceptible difference in quality between calling overseas via the PSTN and via the SIP connection. The connection phase, however, is a little more clunky, as the call has to work its way through a Grandstream device, Asterisk and then the SIP carrier.

ToDo
I am unhappy that I am currently using 2 stage dialling, as mentioned in the link I copied the implementation from,
e.g. Dial(SIP/ht503fxo,60,D(w${EXTEN})). I believe it is possible to change to 1 stage dialling, but I have not tried it yet.
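A rough sketch of what 1 stage dialling might look like in extensions.conf; the peer name follows the example above, and the dial pattern, context placement and the ATA's own one-stage setting are assumptions I haven't tested yet:

[js]
# Hypothetical extensions.conf fragment (goes inside the appropriate context):
# strip the leading 9 and send the number straight out through the FXO peer in one step
cat >> /etc/asterisk/extensions.conf <<'EOF'
exten => _9X.,1,Dial(SIP/ht503fxo/${EXTEN:1},60)
 same => n,Hangup()
EOF
[/js]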

I also need to add voicemail and conferencing functionality once I am happy with the current setup.

Conclusion
It seems to work, but I need to test the SIP international calling more and check that the clarity is good.

Synology DS918+ on multiple subnets/vlans – overload nic/open vSwitch

Preamble
My aim was to introduce some security into my home network by using subnets and VLANs. The issue is that I wanted to make my main fileserver accessible on each network, preferably without going via the router. Going via the router, rather than directly, puts load on the router for no reason.

Solution
Initially I used the excellent solution at http://www.mybenke.org/?p=2373. This works effectively for bonds or single connections.

However, as I began experimenting with virtual machines/Docker, I realised I had to enable Open vSwitch in order to route to the internal virtual machine/Docker bridge. Turning on vSwitch means that all routing is now done in software, and the overloading of the cards as before needs some tweaking to get it to work.
These links were useful but ultimately don’t provide a solution.
https://forum.synology.com/enu/viewtopic.php?t=141679
https://forum.synology.com/enu/viewtopic.php?t=133189

The issues are:
1) They talk about bonding via the command line, but I only have 2 ethernet connections, so once half the setup is done I would lose the connection.
2) The overloading doesn’t survive a reboot.

I found you CAN create the bond/Open vSwitch setup via the GUI, and the overloading described above still works.

However, having done this, it doesn't survive a reboot. In order to fix that you need to modify the /etc/rc.network file. Add:

[js]
is_ovs_child_interface ()
{
    unset OVS_PARENT
    ifconfigFile="/etc/sysconfig/network-scripts/ifcfg-$1"
    if ! source "$ifconfigFile" ; then
        return 1
    fi

    if [ ! -z $OVS_PARENT ]; then
        return 0
    fi

    return 1
}
[/js]

And change activate_ovs as below. It does work, but note it's a little broken, as it generates an ovs_ovs_ovs_ovs… file in the network directory. I didn't bother to fix it.

[js]
activate_ovs ()
{
    [ $# -ne 0 ] || return

    if ! is_ovs_enable; then
        return 0
    fi

    #If the external interface has been remove.
    #Remove the ifcfg of ovs and modify the ovs_interface.conf.
    local tmpdev=`/bin/grep -v ovs /proc/net/dev | /bin/grep ${1##ovs_}` > /dev/null 2>&1
    if [ -z "$tmpdev" ] && ! is_ovs_child_interface "$1" ; then
        ovs-vsctl del-br $1
        sed -i.bak "/${1##ovs_}/"d /usr/syno/etc/synoovs/ovs_interface.conf
        rm /etc/sysconfig/network-scripts/ifcfg-$1
        return 0
    fi

    ifconfigFile="/etc/sysconfig/network-scripts/ifcfg-$1"
    if ! source "$ifconfigFile" ; then
        return
    fi

    check_exist_ovs $1
    if ! is_ovs_child_interface "$1" ; then
        /bin/ovs-vsctl add-br $1
    fi
    if is_ovs_child_interface "$1" ; then
        /bin/ovs-vsctl add-br $1 $OVS_PARENT $OVS_VLAN_ID
    fi
    set_ovs_mac_address $1
    for device in `grep -l "^BRIDGE=$1$" /etc/sysconfig/network-scripts/ifcfg-*` ; do
        DEVICE=`basename ${device} | cut -d '-' -f 2`
        # Config bridge of wlan in /etc/hostapd/hostapd.conf and controlled by hostapd (activate_ap)
        find=`echo "${DEVICE}" | grep -c wlan`
        if [ $find -eq 1 ]; then
            # only wired interface will be added to bridge as a slave
            # wlan interface should be handled by synowifid
            continue
        fi
        $SYNONET --set_ip -4 $DEVICE flush
        /sbin/ifconfig $DEVICE up
        echo 1 >> /proc/sys/net/ipv6/conf/${DEVICE}/disable_ipv6
        /bin/ovs-vsctl add-port $1 ${DEVICE}

        #Set MTU
        MTU_VALUE=`get_mtu_value ${DEVICE}`
        MTU="" ; [ -n "${MTU_VALUE}" ] && MTU="mtu ${MTU_VALUE}"
        ifconfig ${DEVICE} ${MTU}
    done

    start_ovs_vlan $1 $ifconfigFile
    setup_ovs_default_flow $1

    /sbin/ip link set dev $1 up

    return 0
}

[/js]

I didn't go on to fix this as I decided this configuration is definitely not supported and would probably need reapplying after every system update. I don't want the overhead of maintaining this. This has led me to the purchase of a layer 2+ (layer 3) switch.

Apologies that this post isn't a perfect description of how to do the above, as I am writing it a few months later, but if you get stuck please feel free to drop me a line.

Conclusion

You can do it, but it's definitely not Synology supported, and a new piece of hardware is a lot better.

DNS ad-blocking for IPv6

Preamble
My network is running both IPv6 and IPv4, but IPv6 traffic was missing the local DNS.

Solution
Firstly I needed to give my DNS a fixed IPv6 address, rather than letting DHCP decide the address.
I read this: https://en.wikipedia.org/wiki/Unique_local_address
After reading this I get the feeling IPv6 still has a few teething problems.
The bottom line, I believe, is that you need to pick a value in the FD00:: range, say FD00::101.
Thus I can now point my router gateway at my new IPv6 DNS address.
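For illustration, on a generic Linux host the equivalent would be something like the following (the interface name and /64 prefix are assumptions):

[js]
# Illustrative only: give the DNS box a stable ULA address in the fd00:: range
# (interface name and prefix length are assumptions)
ip -6 addr add fd00::101/64 dev eth0
[/js]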

Secondly, my ad-blocker script now needs IPv6 "AAAA" entries.
This is a slight alteration to the script (at the bottom) given by
https://synologytweaks.wordpress.com/2015/08/23/use-synology-as-an-ad-blocker/
Thanks again for the starting point!

[js]
#!/bin/sh
#================================================================================
# (C)2013 dMajo
# Title : ad-blocker.sh
# Version : V1.02.0018
# Author : dMajo (http://forum.synology.com/enu/memberlist.php?mode=viewprofile&u=69661)
# Description : Script to block add-banner servers, dns based
# Dependencies: Syno DNSServer package, sed, wget
# Usage : sh ad-blocker.sh
#================================================================================
# Version history:
# 2013.09.01 - 1.00.0001: Initial release
# 2013.09.08 - 1.00.0004: Fix: changed include target to support views
# 2013.09.12 - 1.00.0005: Added automatic zone file generation and some basic error handling
# 2014.03.29 - 1.01.0013: Added dependencies check
# 2014.03.30 - 1.02.0017: Script reorganized
# 2014.04.06 - 1.02.0018: Fix: fixed serial number in zone file generation
#================================================================================

# Define dirs
RootDir="/var/packages/DNSServer/target"
ZoneDir="${RootDir}/named/etc/zone"
ZoneDataDir="${ZoneDir}/data"
ZoneMasterDir="${ZoneDir}/master"

cd ${ZoneDataDir}

# Check if needed dependencies exist
Dependencies="chown date grep mv rm sed wget"
MissingDep=0
for NeededDep in $Dependencies; do
    if ! hash "$NeededDep" >/dev/null 2>&1; then
        printf "Command not found in PATH: %s\n" "$NeededDep" >&2
        MissingDep=$((MissingDep+1))
    fi
done
if [ $MissingDep -gt 0 ]; then
    printf "Minimum %d commands are missing in PATH, aborting\n" "$MissingDep" >&2
    exit 1
fi

# Download the "blacklist" from "http://pgl.yoyo.org"
wget "http://pgl.yoyo.org/as/serverlist.php?hostformat=bindconfig&showintro=0&mimetype=plaintext"

# Modify Zone file path from "null.zone.file" to "/etc/zone/master/null.zone.file" in order to comply with Synology bind implementation
rm -f ad-blocker.new
sed -e 's/null.zone.file/\/etc\/zone\/master\/null.zone.file/g' "serverlist.php?hostformat=bindconfig&showintro=0&mimetype=plaintext" > ad-blocker.new
rm "serverlist.php?hostformat=bindconfig&showintro=0&mimetype=plaintext"
chown -R nobody:nobody ad-blocker.new
if [ -f ad-blocker.new ] ; then
    rm -f ad-blocker.db
    mv ad-blocker.new ad-blocker.db
fi

# Include the new zone data
if [ -f ad-blocker.db ] && [ -f null.zone.file ]; then
    grep -q 'include "/etc/zone/data/ad-blocker.db";' null.zone.file || echo 'include "/etc/zone/data/ad-blocker.db";' >> null.zone.file

    # Rebuild master null.zone.file
    cd ${ZoneMasterDir}
    rm -f null.zone.file
    Now=$(date +"%Y%m%d")
    echo '$TTL 86400 ; one day' >> null.zone.file
    echo '@ IN SOA ns.null.zone.file. mail.null.zone.file. (' >> null.zone.file
    # echo ' 2013091200 ; serial number YYYYMMDDNN' >> null.zone.file
    echo ' '${Now}'00 ; serial number YYYYMMDDNN' >> null.zone.file
    echo ' 86400 ; refresh 1 day' >> null.zone.file
    echo ' 7200 ; retry 2 hours' >> null.zone.file
    echo ' 864000 ; expire 10 days' >> null.zone.file
    echo ' 86400 ) ; min ttl 1 day' >> null.zone.file
    echo ' NS ns.null.zone.file.' >> null.zone.file
    echo ' A 127.0.0.1' >> null.zone.file
    echo ' AAAA ::1' >> null.zone.file
    echo '* IN A 127.0.0.1' >> null.zone.file
    echo '* IN AAAA ::1' >> null.zone.file
fi

# Reload the server config after modifications
${RootDir}/script/reload.sh

exit 0
[/js]

Google DNSs
In addition I made a static link from 8.8.8.8 and 8.8.4.4 to my own DNSs, as a lot of devices use the Google servers directly.
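Expressed as a sketch for a Linux-based firewall (the LAN DNS address is an assumption, and this is only one way of expressing the idea, not necessarily how my router implements it):

[js]
# Redirect DNS queries aimed at Google's resolvers to the local DNS server
# (192.168.1.101 is an assumed LAN address; UDP only shown)
iptables -t nat -A PREROUTING -p udp -d 8.8.8.8 --dport 53 -j DNAT --to-destination 192.168.1.101
iptables -t nat -A PREROUTING -p udp -d 8.8.4.4 --dport 53 -j DNAT --to-destination 192.168.1.101
[/js]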

Conclusion
Seems to be working, but watch this space.

VLANs on home network with Synology and Draytek modem

Preamble
I am keen to implement VLANs on my home network, as I wish to segregate some computers so I can play with potentially network-destroying stuff in a separate area.

Key facts
VLANs only segregate IPv4 data, not IPv6; for IPv6 you need separate address spaces.
VLAN IDs don't propagate to the wireless network; you need to run a separate wireless network (SSID) for that.
A Synology BOND needs some hacking to support VLANs. However, it appears the new BOND has to have a different IP address for it to appear on the network. I believe this is down to having one ARP table and the same MAC address, but this is the limit of my knowledge at the moment.

Implementing VLANs
I created a separate wifi connection for the default VLAN (which is normally enabled) so that I still had management access to the switches when VLANs were turned on.

Although my switches are not TP-Link devices, I found this article very useful.
https://forum.tp-link.com/showthread.php?76663-TP-LINK-TL-SG108E-VLAN-configuration-issue
The key points are:
Mark the port of any device that is VLAN aware, or that handles DHCP, as "tagged".
For any VLAN-unaware device, mark the port as "untagged" and in addition set a PVID on the port. Then when data leaves the port it is untagged, and when data enters the port it is tagged.
“Tag” any trunk between switches.

Configuring Synology
I have a DS918+ which has 2 ethernet connections, which I join together to make a single bond. To add VLANs I followed this article:
http://www.mybenke.org/?p=2373
However, he shows two bonds having the same IP address. This appears to work, i.e. the Synology comes up, but my TV on the new VLAN couldn't detect the Synology box. I thus had to use a separate IP address on the new VLAN.
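In plain iproute2 terms, what the extra VLAN interface amounts to is roughly this (a sketch only; the interface name, VLAN id and address are assumptions):

[js]
# Sketch: a tagged sub-interface on the bond with its own address for the new VLAN
ip link add link bond0 name bond0.2 type vlan id 2
ip addr add 192.168.2.50/24 dev bond0.2
ip link set bond0.2 up
[/js]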

Synology current issues
I now worry that I have a problem: my router isn't routing to the correct DNS address for the new VLAN.
I have two VLANs say 1 and 2.
On VLAN 1 the DNS is at 192.168.1.101 (Synology box).
On VLAN 2 the DNS is at 192.168.1.102 (Synology box).
On the LAN setup of the router (192.168.1.1) there is only one place where I can specify a DNS. So a request on VLAN 2 (if I use the router as the DNS) can't be routed to the correct DNS. The solution is to specify a different DNS address for every device on VLAN 2 (not ideal, but it will work).
The other issue is that the Media Server (DLNA) needs a BOND specified, and I may need to run another instance; having said that, I haven't got the media server to work on either BOND yet.

Conclusion
It seems to work, but rebooting my router a lot caused my VDSL line speed to drop (hopefully it will train up again). Aside: I also believe I had a "real" call from TalkTalk checking about the line flapping, rather than a hacker… It might have been a hacker (I was checking their ID), but the conversation was very brief: they checked my name and asked me if my internet was working. It wasn't the usual hacker script…

All polymorphism isn’t made equal.

Preamble
I was drawn to making a vector of heterogeneous types. The simplest way is to use classes derived from a common base class and store pointers to the base class in the vector. This is simple but does not scale well, as unrelated types must now share a common base class.
So as an example I wished to try to store

[js]
class square
{
public:
    string name() const { return "square"; };
};

class circle
{
public:
    string name() const { return "circle"; };
};

class triangle
{
public:
    string name() const { return "triangle"; };
};
[/js]

in a std::vector<>.

First try

So I started to investigate std::vector<std::variant>. This does seem to provide what I required.
[js]
// Variant implementation
using shapeType = variant<square, circle, triangle>;
using shapesVec = vector<shapeType>;
[/js]
But if you want to do sorting you need to do something like the following:
[js]
auto name = [](auto&& l_) -> string { return (visit([](auto&& s_) -> string { return s_.name(); }, l_)); };

auto sortByName = [](const shapeType& l_, const shapeType& r_)
{
    return name(l_) < name(r_);
};
[/js]

This utilizes std::visit. In my opinion std::visit seems to do the right thing, but as we have to call down into a lambda it does add a layer of indirection. You could do this with a constexpr get<> directly in the sortByName expression, but that does seem a lot of trouble.

Second try
Going back to basics, you could instead make a generic object_t that hides the type and store that in the std::vector. I kind of like this a bit more; it is wordier and more complicated, but more open to extension, e.g. changing the unique_ptr to a shared_ptr allows the underlying objects to be shared without copying.
See Sean Parent, Better Code: Runtime Polymorphism: https://www.youtube.com/watch?v=QGcVXgEVMJg

[js]
class object_t
{
public:
    struct concept_t
    {
        virtual ~concept_t() = default;
        virtual unique_ptr<concept_t> copy() const = 0;
        virtual string name() const = 0;
    };

    template <typename T>
    struct model final : concept_t
    {
        model(T x_) : _data(move(x_)) {}
        virtual unique_ptr<concept_t> copy() const override
        {
            return make_unique<model>(*this);
        }
        virtual string name() const override
        {
            return _data.name();
        }
        T _data;
    };

    unique_ptr<concept_t> _self;

    string name() const
    {
        return _self->name();
    }

    template <typename T>
    object_t(const T& x_) { _self = make_unique<model<T>>(move(x_)); }

    object_t(const object_t& x_) { _self = x_._self->copy(); }

    object_t(object_t&& x_) = default;
    object_t& operator=(object_t&& x_) = default;
};

using objectVec = vector<object_t>;

auto sortByName2 = [](const object_t& l_, const object_t& r_)
{
    return l_.name() < r_.name();
};
[/js]

Summary

std::variant seems to provide a simple way of doing a heterogeneous vector, but I still think there are better ways. std::variant has to include padding, but it doesn't do any heap allocation (no new), and copying the vector will always be by value.

Using type hiding in a wrapper object gives the same result as std::variant and, although more wordy, is open to extension.

So my feeling is that you still can’t really use std::variant to make a heterogeneous collection in a reasonable way.

Code in full

[js]
//============================================================================
// Name : poly.cpp
// Author :
// Version :
// Copyright : Your copyright notice
// Description : Hello World in C++, Ansi-style
//============================================================================

#include <iostream>
#include <string>
#include <vector>
#include <variant>
#include <memory>
#include <algorithm>
using namespace std;

class square
{
public:
    string name() const { return "square"; };
};

class circle
{
public:
    string name() const { return "circle"; };
};

class triangle
{
public:
    string name() const { return "triangle"; };
};

// Variant implementation
using shapeType = variant<square, circle, triangle>;

auto name = [](auto&& l_) -> string { return (visit([](auto&& s_) -> string { return s_.name(); }, l_)); };

auto sortByName = [](const shapeType& l_, const shapeType& r_)
{
    return name(l_) < name(r_);
};

using shapesVec = vector<shapeType>;

// type hiding implementation
class object_t
{
public:
    struct concept_t
    {
        virtual ~concept_t() = default;
        virtual unique_ptr<concept_t> copy() const = 0;
        virtual string name() const = 0;
    };

    template <typename T>
    struct model final : concept_t
    {
        model(T x_) : _data(move(x_)) {}
        virtual unique_ptr<concept_t> copy() const override
        {
            return make_unique<model>(*this);
        }
        virtual string name() const override
        {
            return _data.name();
        }
        T _data;
    };

    unique_ptr<concept_t> _self;

    string name() const
    {
        return _self->name();
    }

    template <typename T>
    object_t(const T& x_) { _self = make_unique<model<T>>(move(x_)); }

    object_t(const object_t& x_) { _self = x_._self->copy(); }

    object_t(object_t&& x_) = default;
    object_t& operator=(object_t&& x_) = default;
};

using objectVec = vector<object_t>;

auto sortByName2 = [](const object_t& l_, const object_t& r_)
{
    return l_.name() < r_.name();
};

int main()
{
    shapesVec data;
    data.emplace_back(square());
    data.emplace_back(circle());
    data.emplace_back(triangle());

    for (const auto& v : data) { visit([](auto&& s_) { cout << s_.name() << " "; }, v); }
    cout << endl;

    sort(data.begin(), data.end(), sortByName);
    for (const auto& v : data) { visit([](auto&& s_) { cout << s_.name() << " "; }, v); }

    shapesVec dataa(data);
    cout << endl;
    for (const auto& v : dataa) { visit([](auto&& s_) { cout << s_.name() << " "; }, v); }
    cout << endl;

    objectVec data2;
    data2.emplace_back(square());
    data2.emplace_back(circle());
    data2.emplace_back(triangle());

    for (const auto& v : data2) { cout << v.name() << " "; }
    cout << endl;

    sort(data2.begin(), data2.end(), sortByName2);
    for (const auto& v : data2) { cout << v.name() << " "; }

    objectVec data3(data2);
    cout << endl;
    for (const auto& v : data3) { cout << v.name() << " "; }

    return 0;
}
[/js]

Algorithm trumps cache locality-big O matters

Preamble
Investigating
https://www.geeksforgeeks.org/write-a-c-program-that-given-a-set-a-of-n-numbers-and-another-number-x-determines-whether-or-not-there-exist-two-elements-in-s-whose-sum-is-exactly-x/
I was wondering whether cache locality trumps using a hashed set on a modern CPU.
There are 3 implementations of the search:

• Naive one – iterate over all possible combinations
• Sorting – sort the vector and walk it from both ends to find the target sum
• Hash set – use a hash set to store previously tested numbers

Investigation
For small sets of data, say 100 elements, the sort-and-walk pattern looks more successful, as the hashing/inserting in the third solution takes way too much time.
[js]
--------------------------------------------------------
Benchmark     Time              CPU              Iterations
--------------------------------------------------------
BM_METHOD1    14700 ns          14676 ns          43151
BM_METHOD2     1540 ns           1539 ns         447113
BM_METHOD3    23578 ns          23554 ns          29603
[/js]
For larger sets of data, say 1,000,000 elements, the sorting in the walk-based solution appears to dominate, and the hashing solution is definitely the clear winner.

[js]
--------------------------------------------------------
Benchmark     Time              CPU              Iterations
--------------------------------------------------------
BM_METHOD1    108818475 ns      108747000 ns          6
BM_METHOD2     66346574 ns       66334000 ns         10
BM_METHOD3      1583479 ns        1583115 ns        436
[/js]

Conclusion
You need to benchmark if in any doubt. The algorithm always trumps caching at large scale, but at small scale the expense of the algorithm itself becomes important.


PIWIK and Cloudflare using one dynamic ip not working – now half fixed

Preamble
I noticed when I started using Cloudflare I was not getting any hits in piwik.
Actually it was a bit strange…

• On my local network I could get hits
• My mobile phone generated hits
• No hits appeared to be coming through the Cloudflare CDN

Solution
To fix the issue I made the dns at Cloudflare look like:

• CNAME brombo.co.uk => dynamic ip address
• CNAME wwww => brombo.co.uk
• CNAME piwik => dynamic ip address

Investigation
I strongly suspected that it was a caching issue, as that is what a CDN does.
I tried turning off caching on Cloudflare, but this didn't fix the issue.
The solution appears to be to separate your web ip from your piwik ip in the dns.

Initially I had:

• www.brombo.co.uk as my website
• www.brombo.co.uk/piwik as my piwik site

Hence they shared the same root website, and I suspect my tracking javascript code was being redirected to the wrong ip.

I then changed my sites to:

• www.brombo.co.uk – website
• piwik.brombo.co.uk – piwik site, via adding a dns entry/virtual host

This still didn't fix it, as my dns looked something like this:

• CNAME brombo.co.uk => dynamic ip address
• CNAME wwww => brombo.co.uk
• CNAME piwik => brombo.co.uk

I suspect Cloudflare just breaks the alias at brombo.co.uk, hence piwik.brombo.co.uk still didn't work.

To fix the issue I made the dns at Cloudflare look like:

• CNAME brombo.co.uk => dynamic ip address
• CNAME wwww => brombo.co.uk
• CNAME piwik => dynamic ip address

Now I seem to be able to generate hits using
https://www.geoscreenshot.com/capture

ToDo
I have tried using the piwik app on Cloudflare, but this currently doesn't appear to generate any hits for me. Also, the ip addresses I am currently receiving are, I believe, actually Cloudflare ones… I need to add the mod to the Synology to fix this.