(28th August 2016)
|
Traditionally computers were named individually and were not easily replaced when one broke down. Server software listened on hard-coded ports, and to link pieces together these machine names and service ports were hard-coded into other programs' configuration files. In the era of cloud computing and service-oriented architecture this is no longer an adequate solution, so elastic scaling and service discovery are becoming the norm. One easy solution is to combine the powers of Docker, Consul and Registrator.
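Roughly, the setup can be sketched like this (a single-host setup is assumed, and image names follow the projects' quick-start documentation):

    # Run a single-node Consul server with its HTTP API on the default port 8500.
    docker run -d --name=consul --net=host consul agent -server -bootstrap -client=0.0.0.0

    # Registrator watches the Docker socket and automatically registers each
    # published container port as a service in Consul.
    docker run -d --name=registrator --net=host \
        -v /var/run/docker.sock:/tmp/docker.sock \
        gliderlabs/registrator consul://localhost:8500

    # Any container started with a published port now shows up in the catalog:
    docker run -d -p 80:80 nginx
    curl http://localhost:8500/v1/catalog/service/nginx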
|
|
(16th April 2016)
|
Often I find myself with an SSH connection to a remote server from which I'd like to retrieve some files to my own machine. Common methods for this include Windows/Samba shares, SSHFS and uploading to the cloud (which isn't trivial to do with plain cURL). Here an easy-to-use alternative is described: a single-line command that loads and runs a Docker image containing a pre-configured Nginx instance. Files can then be fetched over plain HTTP at the user-assigned port (assuming a firewall isn't blocking it).
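I won't reproduce the exact image here, but a comparable one-liner with the stock nginx image conveys the idea (remote-server and the file name are placeholders):

    # On the remote server: serve the current directory read-only on port 8080.
    # The pre-configured image would additionally enable directory listings;
    # the stock image only serves files requested by name.
    docker run -d --name fileshare -p 8080:80 -v "$PWD":/usr/share/nginx/html:ro nginx

    # On the local machine:
    curl -O http://remote-server:8080/some-file.tar.gz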
|
|
(23rd December 2015)
|
Traditionally data scientists installed software packages directly on their machines, wrote code, trained models, saved results to local files and applied the models to new data in batch-processing style. New data-driven products require rapid development of new models, scalable training and easy integration with other aspects of the business. Here I am proposing one (perhaps already well-known) cloud-ready architecture to meet these requirements.
|
|
(10th April 2015)
|
Out of interest in nature observation, computer vision, image processing and so forth, I developed an automated system that captures one photo per minute and stores it on disk. The project also has Bash and PHP scripts coordinating external tools such as montage for image stitching and mencoder for video generation. PHP also provides an HTTP API for image generation and file-size statistics.
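The core of such a pipeline fits in a few lines; here fswebcam stands in for whatever capture tool is actually attached to the camera, and the paths are illustrative:

    # Cron entry: capture one timestamped 1280x720 frame per minute.
    # * * * * * fswebcam -r 1280x720 --no-banner /data/frames/$(date +\%Y\%m\%d-\%H\%M).jpg

    # Stitch a grid overview of a day's frames with ImageMagick's montage:
    montage /data/frames/20150410-*.jpg -tile 4x -geometry +2+2 overview.jpg

    # Encode the same frames into a time-lapse video with mencoder:
    mencoder "mf:///data/frames/20150410-*.jpg" -mf fps=25:type=jpg \
        -ovc lavc -lavcopts vcodec=mpeg4 -o timelapse.avi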
|
|
(1st September 2014)
|
Even in desktop applications it is becoming more and more common to provide an HTTP-based API or a full user interface. For example, BitTorrent's μTorrent and BitTorrent Sync don't have any built-in UI; instead users just point their preferred web browser at http://localhost:8080 or http://localhost:8888. However, these services typically lack HTTPS encryption, and each port needs to be forwarded through the NAT router individually. This solution uses an Nginx instance on a virtual machine to provide an HTTPS reverse proxy to all these services on a single port under different sub-domains.
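One server block per sub-domain is enough; in this sketch the sub-domain, certificate paths and backend address are placeholders:

    # Write a sub-domain-specific reverse-proxy config and reload Nginx.
    cat > /etc/nginx/conf.d/utorrent.conf <<'EOF'
    server {
        listen 443 ssl;
        server_name utorrent.example.com;
        ssl_certificate     /etc/nginx/ssl/example.com.crt;
        ssl_certificate_key /etc/nginx/ssl/example.com.key;
        location / {
            proxy_pass http://192.168.1.10:8080;
            proxy_set_header Host $host;
        }
    }
    EOF
    nginx -s reload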
|
|
(17th July 2014)
|
In addition to a mirrored and check-summed ZFS-based backup server, I wanted to have backups outside my premises to be safer against hazards such as burglary, fire and water damage. ZFS can already survive a single-disk failure and can repair silent data corruption, but for important memories that isn't a sufficient level of protection. My ever-growing data set is currently 150k files, having a total size of 520 GB. Amazon's Glacier seems to be the most cost-efficient solution, with sophisticated APIs and SDKs.
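With the AWS CLI an upload could look like the following; the vault name and paths are placeholders, the client-side encryption step is my own assumption, and archives over 4 GB would need the multipart API instead:

    # Bundle and encrypt a directory, then push the archive to a Glacier vault
    # ("--account-id -" means the account of the current credentials).
    tar czf - /pool/photos | gpg --symmetric --cipher-algo AES256 -o photos.tar.gz.gpg
    aws glacier upload-archive --vault-name family-backups --account-id - \
        --archive-description "photos 2014-07" --body photos.tar.gz.gpg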
|
|