rclone and Amazon Cloud Drive Broken

So rclone and Amazon Cloud Drive were too good to be true. So good, in fact, that Amazon have pulled the access tokens that rclone uses, and my dream of Linux-based photo backups has gone. For now I'll have to sync my photos using one of Amazon's crappy interfaces. The issue, along with the developer feedback, is here. (Nb: Header image from pixabay.com.) »

rclone and Amazon Cloud Drive

I wanted to take a minute to post about the most useful tool I've stumbled across in recent months: rclone. Rclone is a command-line tool written in Go, designed to synchronise files between local and remote systems. I'll let you read the homepage for the complete list of destinations, but the two that are instantly useful for me are Amazon Cloud Drive (for personal projects) and Amazon S3 (for work). »
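As a rough sketch of what that workflow looks like: a remote is set up once interactively with `rclone config`, after which a one-way sync is a single command. The remote name `acd:` and the paths below are illustrative labels I've chosen, not anything rclone mandates.

```
# One-off interactive setup of a remote (the name 'acd' here is illustrative)
rclone config

# Preview what a one-way sync of a local photo folder would change
rclone sync --dry-run /home/me/photos acd:photos

# Happy with the plan? Run it for real
rclone sync /home/me/photos acd:photos
```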

Operation Scan All The Things Complete

I am done. I have scanned some 6,240 photos over the last six months: a mixture of 35mm, 110, 4x3” and slide film, not to mention a large quantity of printed pictures. Just over 1,000 images came from my grandparents' photo collection, and the other 5,000+ from my parents'. If I'm honest, it has been a slog; a slog I don't wish to repeat. Constantly switching negatives over, adjusting images, trying to guesstimate when a photo might have been taken, trying to work out who is in which photo and, if I'm feeling really pedantic, where the photo might have been taken. »

Operation Scan All The Things

So this weekend I started a new personal project. It's not programming related, but it does involve a computer. A computer and what can only be described as a “shit ton” of negatives. After playing with Google Photos, deciding that it's the greatest thing since sliced bread and introducing it to my entire family, I've decided to archive my parents' entire collection of film. Why Google Photos? Google Photos is an online service that provides unlimited photo and video storage (provided images are no more than 16MP and videos no greater than 1080p). »

NodeJS Scraping

Every programmer at some point in their career will need to scrape at least one webpage, guaranteed; it's almost a rite of passage. Recently I've started a side-project that required data, data which unfortunately couldn't be obtained programmatically and needed extracting from two completely different websites. An annoyance? Maybe. An opportunity to try out some different scraping frameworks? Definitely! We wanted to do the scraping in a lean manner, and we don't anticipate using the final code in anger, so the search was limited to NodeJS, and the following two libraries were found: »
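The excerpt cuts off before naming the two libraries, but as an illustrative sketch of the kind of lean NodeJS scrape being described, here's a minimal example using cheerio, a popular scraping library chosen here purely for illustration (not necessarily one of the two from the post); the URL and selector are hypothetical.

```javascript
// Minimal NodeJS scraping sketch. cheerio is used purely as an illustration;
// the post's two candidate libraries are not named in the excerpt.
// Assumes Node 18+ (built-in fetch) and `npm install cheerio`.
const cheerio = require('cheerio');

async function scrapeHeadlines(url) {
  const res = await fetch(url);
  const html = await res.text();

  // Load the HTML and query it with a jQuery-like API
  const $ = cheerio.load(html);

  // Selector is hypothetical; adjust it to the target page's markup
  return $('h2.headline')
    .map((i, el) => $(el).text().trim())
    .get();
}

// Example usage with a placeholder URL
scrapeHeadlines('https://example.com/news')
  .then((headlines) => console.log(headlines))
  .catch((err) => console.error(err));
```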