Latest Blog Posts



Configuring your Elastic Beanstalk App for SSL

It’s always a good idea to add an SSL certificate. It gives people peace of mind when visiting your site that their information isn’t being accessed by third parties, and it also boosts your SEO ranking in Google. Setting your Elastic Beanstalk app up for SSL isn’t too difficult and requires just a few simple steps.

Getting Started

I’m going to assume you have a domain already registered, either living in Route 53 or with another domain provider. To start, if you haven’t done so already, you’ll need to point your domain to your EB app. This can be done by creating an Alias A record and setting its value equal to your EB app’s URL, which can be found on the Elastic Beanstalk Dashboard.

Elastic Beanstalk Management Console

When adding this to your domain, ensure the A record has Alias set to Yes. The value will then be your app’s Elastic Beanstalk URL.

Route 53 Configuration for an A Record

Now, if you visit your domain, you’ll see your app! If you try to visit it over HTTPS, however, the request will time out, as there’s no certificate. So let’s add a certificate!

Configure your App

In order to use an SSL certificate for your Elastic Beanstalk app, you’ll need to change the configuration of your app to use a load balancer as opposed to a single instance. This can cost more, so please check your billing dashboard to make sure you’re not going over budget.

What are Application Load Balancers?

In essence, instead of running a single instance, a load balancer distributes traffic across multiple targets (instances) in multiple Availability Zones, which boosts the availability of your app.

In our example the SSL certificate is applied to the load balancer, so connections between the Client and Load Balancer are secure and encrypted.

In order to configure your app, head to the Configuration tab of your Elastic Beanstalk dashboard and click the modify link on the Capacity card.

Configuration Tab of Elastic Beanstalk App — Capacity card located at the top right

Once here, the only thing I advise you to change is the maximum number of instances; I changed mine from 4 to 1, but that’s up to you.

Don’t change anything else, just hit Save.

Adding a Load Balancer

This will then take you back to the configuration page, where you’ll need to hit Apply for your changes to take effect. As your app will be unavailable for a short period whilst the changes are applied, you’ll need to confirm again after hitting Apply.

Creating an SSL Certificate with ACM

Now we need to actually create our certificate. As we’re using Elastic Beanstalk, it makes sense to create the certificate in ACM (AWS Certificate Manager).

In my case, I opted for a wildcard certificate for my domain. This means all subdomains will be covered by the same SSL certificate. To do this, head over to ACM and request a certificate. Type in your domain; if you wish to set up a wildcard, add *. to the beginning of your domain.

AWS Certificate Manager Requesting a Certificate
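To make the wildcard behaviour concrete, here’s a rough plain-JavaScript sketch (using a made-up domain, example.com) of which hosts a *.example.com certificate would cover:

```javascript
// Rough sketch (hypothetical domain "example.com", not from the original post):
// a wildcard matches exactly one subdomain label — not the bare apex domain
// and not nested subdomains.
function wildcardCovers(host) {
  var parts = host.split(".");
  return parts.length === 3 && parts.slice(1).join(".") === "example.com";
}

console.log(wildcardCovers("www.example.com"));  // true
console.log(wildcardCovers("example.com"));      // false: apex needs its own entry
console.log(wildcardCovers("a.b.example.com"));  // false: nested subdomain
```

Because the apex domain itself isn’t matched by the wildcard, it’s common to add both the bare domain and the *. entry to the same certificate request.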

You’ll have two options to validate that you’re the owner of the domain: DNS or email. I chose DNS, but if you choose email, make sure you have access to the relevant addresses on that domain.

DNS Config for domain

To verify via DNS, you’ll need to add a CNAME record using the values generated in your DNS_Configuration.csv file.

Enter the Name value and the Value from the .csv file and hit Create.

This will take a little time to verify, but once done, your certificate should move from Pending to Issued.

Bringing it all together

Lastly, we need to apply our newly created SSL Certificate to our App’s Load Balancer. To do this, navigate to the Configuration Tab of your Elastic Beanstalk App. There should be a new card labelled Load Balancer.

Click modify on the Load Balancer card

In order to add the certificate, we’ll need to open up port 443 (SSL Port) and assign our certificate.

If your certificate doesn’t appear in the dropdown, try refreshing and waiting a bit. Once it does, choose it and hit save. Again, you’ll be directed back to the Configuration page where you’ll have to Apply your changes.

Once completed, navigate to your domain over HTTPS and you should see your site served securely.

Site Secured!

As always, thanks for reading, hit 👏 if you like what you read and be sure to follow to keep up to date with future posts.

Node.js RESTful API with DynamoDB Local

Node is usually used alongside MongoDB in the MEAN stack. However, using Amazon’s DynamoDB has its own benefits, not least speed, scalability, affordability and freeing up your time from configuring database clusters and updates. In this post I’ll discuss how to set up DynamoDB with your Node project locally.


Setting up your Node Project

To get things moving quickly, we’ll use the express generator to scaffold a project for us.

$ express node-dynamo-db

create : node-dynamo-db
create : node-dynamo-db/package.json
create : node-dynamo-db/app.js
create : node-dynamo-db/public
create : node-dynamo-db/routes
create : node-dynamo-db/routes/index.js
create : node-dynamo-db/routes/users.js
create : node-dynamo-db/views
create : node-dynamo-db/views/index.jade
create : node-dynamo-db/views/layout.jade
create : node-dynamo-db/views/error.jade
create : node-dynamo-db/bin
create : node-dynamo-db/bin/www
create : node-dynamo-db/public/javascripts
create : node-dynamo-db/public/images
create : node-dynamo-db/public/stylesheets
create : node-dynamo-db/public/stylesheets/style.css
install dependencies:
$ cd node-dynamo-db && npm install
run the app:
$ DEBUG=node-dynamo-db:* npm start
$ cd node-dynamo-db
$ npm install

Fire up your server to ensure it’s all working as intended.

$ npm start

Navigate to http://localhost:3000 and you’ll see the welcome page from express, like below.

Generic Express Welcome Page

Next, as there’s no live-reloading, we’ll install Nodemon to watch our files; whenever a change is made, it’ll restart the server for us. Without Nodemon, you’re gonna get frustrated real fast. Once installed, we’ll update our start command within the package.json to run the nodemon command as opposed to node.

$ npm install -g nodemon
#JSON - package.json
{
  "name": "node-dynamo-db",
  "version": "0.0.0",
  "private": true,
  "scripts": {
    "start": "nodemon ./bin/www"
  },
  "dependencies": {
    "body-parser": "~1.18.2",
    "cookie-parser": "~1.4.3",
    "debug": "~2.6.9",
    "express": "~4.15.5",
    "jade": "~1.11.0",
    "morgan": "~1.9.0",
    "serve-favicon": "~2.4.5"
  }
}

Setting up DynamoDB

First, download DynamoDB Local from AWS, unpack it and navigate into the directory. You’ll notice DynamoDB is provided as an executable .jar file. In order to start the database, we need to run the following command from the directory the .jar file is located in.

$ java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb
Initializing DynamoDB Local with the following configuration:
Port: 8000
InMemory: false
DbPath: null
SharedDb: true
shouldDelayTransientStatuses: false
CorsParams: *

Boom, you’ve got a local instance of DynamoDB running! Problem is, unless you’re gifted with a photographic memory, you’re probably not going to remember the above command, and even if you do, it’s a ballache to write out each time. Let’s speed things up and create an alias command within our .bashrc or .zshrc, depending on what you use. Mine looks like this.

#bash .zshrc or .bashrc
alias ddb="cd path/to/dynamodb_local_latest && java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb"

I’ve named my alias ddb, it navigates to the directory and then executes the .jar, simple as that. Now when reloading my terminal window and running ddb, DynamoDB should spin up.

$ ddb
Initializing DynamoDB Local with the following configuration:
Port: 8000
InMemory: false
DbPath: null
SharedDb: true
shouldDelayTransientStatuses: false
CorsParams: *

Now we’re all set to start creating our table and to begin seeding some data into our table. For the purpose of this demo, I’ll be making a database revolving around cars.

Before moving forward, let’s just update our package.json to automate some of the commands we’ll be running fairly frequently.

#JSON - package.json
{
  "name": "crafty-api",
  "version": "0.0.0",
  "private": true,
  "scripts": {
    "start": "nodemon app.js",
    "create-db": "cd dynamodb && node createCarsTable.js && cd ..",
    "delete-db": "cd dynamodb && node deleteCarsTable.js && cd ..",
    "load-data": "cd dynamodb && node loadCarData.js && cd ..",
    "read-data": "cd dynamodb && node readDataTest.js && cd .."
  },
  "dependencies": {
    "aws-sdk": "^2.176.0",
    "body-parser": "~1.18.2",
    "cookie-parser": "~1.4.3",
    "cors": "^2.8.4",
    "debug": "~2.6.9",
    "ejs": "^2.5.7",
    "express": "~4.15.5",
    "jade": "~1.11.0",
    "morgan": "~1.9.0",
    "newman": "^3.9.1",
    "node-uuid": "^1.4.8",
    "serve-favicon": "~2.4.5",
    "uuid": "^3.2.1"
  }
}

This is what my current one looks like, and it speeds things up so much; consider adding your own scripts to speed up your workflow.

First things first, we’re gonna need to create a table and choose a partition key. Amazon provides pretty good advice on what constitutes a good key. The reason we need a key is that DynamoDB partitions our data across multiple storage units and uses that key to both store and read the data. Therefore, the partition key must be a unique value. Good examples are user IDs and device IDs.

For my table I’ve chosen car_id.
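To see why uniqueness matters, here’s a toy plain-JavaScript sketch (no AWS involved) of how a table keyed on its partition key behaves:

```javascript
// Toy illustration: a table keyed on its partition key behaves like a map,
// so putting a second item with the same id silently overwrites the first —
// which is why the key must be unique per item.
var table = {}; // stands in for the Cars table, keyed by id

function putItem(item) {
  table[item.id] = item;
}

putItem({ id: 1, name: "Toyota Yaris" });
putItem({ id: 1, name: "Volkswagen Golf" }); // same id: overwrites, no error

console.log(Object.keys(table).length); // 1
console.log(table[1].name);             // Volkswagen Golf
```

DynamoDB’s PutItem behaves the same way: a put with an existing key replaces the item rather than raising an error.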

#JavaScript - createCarsTable.js
var AWS = require("aws-sdk");

AWS.config.update({
  region: "eu-west-2",
  endpoint: "http://localhost:8000"
});

var dynamodb = new AWS.DynamoDB();

var params = {
  TableName: "Cars",
  KeySchema: [
    { AttributeName: "id", KeyType: "HASH" } // Partition key
  ],
  AttributeDefinitions: [
    { AttributeName: "id", AttributeType: "N" }
  ],
  ProvisionedThroughput: {
    ReadCapacityUnits: 5,
    WriteCapacityUnits: 5
  }
};

dynamodb.createTable(params, function(err, data) {
  if (err) {
    console.error("Unable to create table. Error JSON:", JSON.stringify(err, null, 2));
  } else {
    console.log("Created table. Table description JSON:", JSON.stringify(data, null, 2));
  }
});

Now run your create-db command, making sure DynamoDB is running in the background in another terminal window, on port 8000.

yarn create-db
yarn run v1.3.2
$ cd dynamodb && node createCarsTable.js && cd ..
Created table. Table description JSON: {
  "TableDescription": {
    "AttributeDefinitions": [
      {
        "AttributeName": "id",
        "AttributeType": "N"
      }
    ],
    "TableName": "Cars",
    "KeySchema": [
      {
        "AttributeName": "id",
        "KeyType": "HASH"
      }
    ],
    "TableStatus": "ACTIVE",
    "CreationDateTime": "2018-02-01T16:08:25.308Z",
    "ProvisionedThroughput": {
      "LastIncreaseDateTime": "1970-01-01T00:00:00.000Z",
      "LastDecreaseDateTime": "1970-01-01T00:00:00.000Z",
      "NumberOfDecreasesToday": 0,
      "ReadCapacityUnits": 5,
      "WriteCapacityUnits": 5
    },
    "TableSizeBytes": 0,
    "ItemCount": 0,
    "TableArn": "arn:aws:dynamodb:ddblocal:000000000000:table/Cars"
  }
}
✨ Done in 0.47s.

Now your table is set up and ready to seed data into.

In this example, we’re using DynamoDB’s PutItem method to seed some data into our database.

#JSON - carData.json
[
  {
    "id": 1,
    "type": "Automatic",
    "name": "Toyota Yaris",
    "manufacturer": "Toyota",
    "fuel_type": "Petrol",
    "description": "A smooth ride"
  },
  {
    "id": 2,
    "type": "Manual",
    "name": "Volkswagen Golf",
    "manufacturer": "Volkswagen",
    "fuel_type": "Petrol",
    "description": "Good Value"
  }
]
#JavaScript - loadCarData.js
var AWS = require("aws-sdk");
var fs = require('fs');

AWS.config.update({
  region: "eu-west-2",
  endpoint: "http://localhost:8000"
});

var docClient = new AWS.DynamoDB.DocumentClient();

console.log("Importing Cars into DynamoDB. Please wait.");

var cars = JSON.parse(fs.readFileSync('carData.json', 'utf8'));

cars.forEach(function(car) {
  var params = {
    TableName: "Cars",
    Item: {
      "id": car.id,
      "name": car.name,
      "type": car.type,
      "manufacturer": car.manufacturer,
      "fuel_type": car.fuel_type,
      "description": car.description
    }
  };

  docClient.put(params, function(err, data) {
    if (err) {
      console.error("Unable to add Car", car.name, ". Error JSON:", JSON.stringify(err, null, 2));
    } else {
      console.log("PutItem succeeded:", car.name);
    }
  });
});

If you run your load-data command, it should seed the two items from our carData.json file into the table and log the names back to the console, like below.

yarn load-data
yarn run v1.3.2
$ cd dynamodb && node loadCarData.js && cd ..
Importing Cars into DynamoDB. Please wait.
{ id: 1,
type: 'Automatic',
name: 'Toyota Yaris',
manufacturer: 'Toyota',
fuel_type: 'Petrol',
description: 'A smooth ride' }
{ id: 2,
type: 'Manual',
name: 'Volkswagen Golf',
manufacturer: 'Volkswagen',
fuel_type: 'Petrol',
description: 'Good Value' }
PutItem succeeded: Toyota Yaris
PutItem succeeded: Volkswagen Golf
✨ Done in 0.46s.

Now our data’s in there, but how do we know? Let’s run a quick test using DynamoDB’s DocumentClient .get method. DocumentClient is just a class that simplifies working with DynamoDB items.
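To illustrate what that simplification buys us, here’s a small side-by-side comparison (no AWS call is made; the values are taken from carData.json) of the low-level typed attribute format versus the plain objects the DocumentClient works with:

```javascript
// Side-by-side sketch: the low-level client speaks DynamoDB's typed
// attribute-value format, while DocumentClient uses plain JS values.
var lowLevelItem = {            // shape of an item from the low-level AWS.DynamoDB client
  id: { N: "1" },               // numbers arrive as strings tagged "N"
  name: { S: "Toyota Yaris" }   // strings tagged "S"
};

var documentClientItem = {      // shape of the same item from DocumentClient
  id: 1,
  name: "Toyota Yaris"
};

console.log(documentClientItem.id + 1); // 2 — already a real number
```

With the low-level format you’d have to unwrap and convert every attribute yourself; DocumentClient does the marshalling for you.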

#JavaScript - readDataTest.js
var AWS = require("aws-sdk");

AWS.config.update({
  region: "eu-west-2",
  endpoint: "http://localhost:8000"
});

var docClient = new AWS.DynamoDB.DocumentClient();

var table = "Cars";
var id = 1;

var params = {
  TableName: table,
  Key: {
    "id": id
  }
};

docClient.get(params, function(err, data) {
  if (err) {
    console.error("Unable to read item. Error JSON:", JSON.stringify(err, null, 2));
  } else {
    console.log("GetItem succeeded:", JSON.stringify(data, null, 2));
  }
});

Remembering our JSON file, we should expect the Toyota Yaris to be returned to the console…

$ yarn read-data
yarn run v1.3.2
$ cd dynamodb && node readDataTest.js && cd ..
GetItem succeeded: {
  "Item": {
    "name": "Toyota Yaris",
    "description": "A smooth ride",
    "id": 1,
    "type": "Automatic",
    "fuel_type": "Petrol",
    "manufacturer": "Toyota"
  }
}
✨ Done in 0.56s.

BAM! DynamoDB is set up and seeded with data; now we just need to bring all the elements together.

Bringing it all together

At the moment, our Node backend isn’t actually talking to DynamoDB at all. Let’s change that by incorporating some of the methods we’ve used above and creating a route that returns all cars.

To do this, we’re going to use the DocumentClient’s scan method.

#JavaScript - app.js
var express = require('express');
var path = require('path');
var favicon = require('serve-favicon');
var logger = require('morgan');
var cookieParser = require('cookie-parser');
var bodyParser = require('body-parser');
var AWS = require("aws-sdk");

var app = express();

AWS.config.update({
  region: "eu-west-2",
  endpoint: "http://localhost:8000"
});

var docClient = new AWS.DynamoDB.DocumentClient();

app.use(bodyParser.urlencoded({ extended: false }));
app.set('view engine', 'jade');

app.get('/', function (req, res) {
  res.send({ title: "Cars API Entry Point" });
});

app.get('/cars', function (req, res) {
  var params = {
    TableName: "Cars",
    ProjectionExpression: "#id, #name, #type, #manufacturer, #fuel_type, #description",
    ExpressionAttributeNames: {
      "#id": "id",
      "#name": "name",
      "#type": "type",
      "#manufacturer": "manufacturer",
      "#fuel_type": "fuel_type",
      "#description": "description"
    }
  };

  console.log("Scanning Cars table.");
  docClient.scan(params, onScan);

  function onScan(err, data) {
    if (err) {
      console.error("Unable to scan the table. Error JSON:", JSON.stringify(err, null, 2));
    } else {
      // print all the Cars
      console.log("Scan succeeded.");
      data.Items.forEach(function(car) {
        console.log(car.id, car.type, car.name);
      });

      // scan can only return a page at a time, so keep going if there's more
      if (typeof data.LastEvaluatedKey != "undefined") {
        console.log("Scanning for more...");
        params.ExclusiveStartKey = data.LastEvaluatedKey;
        docClient.scan(params, onScan);
      } else {
        res.send(data);
      }
    }
  }
});

app.listen(3000, () => console.log('Cars API listening on port 3000!'));

This is what you want your app.js file to look like. I know we could refactor this and move some code into the routes folder, but to keep this article as to-the-point as possible, I’ll leave that to you.

As the file shows, we create a new route called /cars and a params variable, which contains the name of the table and what we want returned from our scan. We then create a function called onScan, which sends our data to the client and logs the results to the console. This also contains some error catching, should there be any issues with your request.
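The recursive re-scan is worth spelling out, since scan only returns a page of results at a time. Here’s a self-contained sketch of the same pattern, with a fake scan function standing in for docClient.scan and made-up pages of data:

```javascript
// Sketch of the onScan pagination pattern: a page carrying LastEvaluatedKey
// means there's more data, so we re-issue the scan starting from that key.
var pages = [
  { Items: [{ name: "Toyota Yaris" }], LastEvaluatedKey: { id: 1 } },
  { Items: [{ name: "Volkswagen Golf" }] } // no LastEvaluatedKey: final page
];
var call = 0;

function fakeScan(params, callback) { // stands in for docClient.scan
  callback(null, pages[call++]);
}

var all = [];
function onScan(err, data) {
  if (err) return console.error(err);
  all = all.concat(data.Items);
  if (typeof data.LastEvaluatedKey != "undefined") {
    fakeScan({ ExclusiveStartKey: data.LastEvaluatedKey }, onScan);
  } else {
    console.log(all.length + " cars collected"); // 2 cars collected
  }
}

fakeScan({}, onScan);
```

With the real client, forgetting this loop means silently dropping any items beyond the first page.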

Now, if you navigate to http://localhost:3000/cars you should see something resembling the below.

#JSON - response from http://localhost:3000/cars
{"Items":[{"name":"Volkswagen Golf","description":"Good Value","id":2,"fuel_type":"Petrol","type":"Manual","manufacturer":"Volkswagen"},{"name":"Toyota Yaris","description":"A smooth ride","id":1,"fuel_type":"Petrol","type":"Automatic","manufacturer":"Toyota"}],"Count":2,"ScannedCount":2}

Great job! Now you’ve got the building blocks of a Node.js RESTful API using AWS DynamoDB.

Let’s do one more route where we ask DynamoDB to return a car, by id.

Let’s call our route /cars/:id. We’ll pass the ID in via the request URL, then use it to query the table and return the correct car. We get the id value by slicing the string to leave only the number.

Remember, however, that when we created our table we specified that id was a number type. Therefore, if we pass the value on as it is, DynamoDB will spit back an error. We first need to convert our id value from a string to an integer using parseInt().
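As a quick illustration of that slice-and-convert step (with a hypothetical request URL):

```javascript
// What req.url would be for GET /cars/1 (hypothetical request):
var url = '/cars/1';
var idString = url.slice(6);         // everything after '/cars/' -> '1'
var carID = parseInt(idString, 10);  // the table's id key is a number type

console.log(typeof idString); // string
console.log(typeof carID);    // number
```

(Using Express’s req.params.id would get you the same string without the manual slice, but the conversion to a number is still needed either way.)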

#JavaScript - app.js
app.get('/cars/:id', function (req, res) {
  var carID = parseInt(req.url.slice(6));
  var params = {
    TableName: "Cars",
    KeyConditionExpression: "#id = :id",
    ExpressionAttributeNames: {
      "#id": "id"
    },
    ExpressionAttributeValues: {
      ":id": carID
    }
  };

  docClient.query(params, function(err, data) {
    if (err) {
      console.error("Unable to query. Error:", JSON.stringify(err, null, 2));
    } else {
      console.log("Query succeeded.");
      data.Items.forEach(function(car) {
        console.log(car.id, car.name, car.type);
      });
      res.send(data.Items);
    }
  });
});

We save our converted carID value in a variable and use it in our params object. We then use the query method to gather the data and return it to the client. If everything is set up correctly, you should be able to navigate to http://localhost:3000/cars/1 and see that the Yaris is returned as JSON. If you check your terminal, you’ll see the id, name and type of the car queried.

#JSON - http://localhost:3000/cars/1
[{"name":"Toyota Yaris","description":"A smooth ride","id":1,"type":"Automatic","fuel_type":"Petrol","manufacturer":"Toyota"}]
$ yarn start
[nodemon] starting `node app.js`
Cars API listening on port 3000!
Query succeeded.
1 'Toyota Yaris' 'Automatic'
GET /cars/1 200 47.279 ms - 126

From here you can add additional routes to search by car name, car type and look to implement POSTing to the DB. Hint: this will be similar to our loadCarData.js file, using DynamoDB’s PutItem function.

Next time I’ll look to deploy our sample app to AWS Elastic Beanstalk along with AWS DynamoDB and implement a build pipeline with CircleCI and testing using Postman.

If you wish, you can check all the code out here, at the example Github Repo.

As always, thanks for reading, hit 👏 if you like what you read and be sure to follow to keep up to date with future posts.

Node.js RESTful API with DynamoDB Local was originally published in Quick Code on Medium.

Automation With Cron

There’s been quite a few projects recently where I’ve found myself doing tedious things that can easily be automated. Enter Cron, the time-based job scheduler.

Once or twice a week I find myself updating/rebuilding my Jekyll Site because it displays my Medium RSS feed as a Blog page. As the site is static, I need to re-build it every time I publish something new. It’s not an issue as it’s only a few commands, but sometimes I forget and weeks can go by without updating my site. Writing a Cron job is so stupidly easy, you’ll find yourself automating pretty much every mundane task you do.

Writing a Cron Job

$ env EDITOR=nano crontab -e #opens nano in your terminal, with your crontab open. 

You should see something like this now, after running the above command.

Ok, so yours won’t look exactly like this, as mine contains my cron job; it should be a nice empty text file. For my purposes, I decided to create an executable file with the commands and have cron run that. You can write the commands directly in here instead, but it’s tidier to write a script for it.

Cron Time Intervals

Cron works by executing your commands at a set time, whether that be weekly on a Monday at 2pm or on the 15th of each month. The five * fields at the start represent, in order: minute, hour, day of month, month, and day of week.

You’ll notice in my example, I decided to execute every week on a Sunday.

Here are a few more examples; the first runs a job at 6:30pm (18:30) every Tuesday.

30 18 * * 2 yourcommand #execute every Tuesday at 18:30
45 07 10 * * yourcommand # execute every 10th day of the month at 07:45
20 15 * * 3 yourcommand # execute at 15:20 every Wednesday
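If it helps to see the fields named, here’s a tiny plain-JavaScript sketch that labels each field of the Tuesday example above:

```javascript
// Quick reference: the five cron fields in order, applied to "30 18 * * 2"
// (18:30 every Tuesday, from the examples above).
var fields = ["minute", "hour", "day of month", "month", "day of week"];
var entry = "30 18 * * 2";

entry.split(" ").forEach(function (value, i) {
  console.log(fields[i] + ": " + value);
});
```

Day of week runs 0–6 starting from Sunday (7 also means Sunday on most systems), which is why 2 here means Tuesday.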

Let’s make this a little easier and add some shortcut commands to our .bashrc or .zshrc (depending on what you’re using).

Using the alias command we can shortcut a long command to something a little easier to remember and type.

#.bashrc or .zshrc
alias newcron="env EDITOR=nano crontab -e" #open a new crontab
alias cronjobs="crontab -l" # list our cronjobs

Now you’re set to automate whatever you want!

As always, thanks for reading, hit 👏 if you like what you read and be sure to follow to keep up to date with future posts.

Circle CI and DynamoDB

After searching around whilst building a Node.js API project, I realised there wasn’t much documented on how to set up CircleCI with AWS DynamoDB for testing purposes within your build pipeline. I thought I’d do a quick post to summarise the process and hopefully make it clearer for others trying to achieve the same.

Getting Started

As I mentioned above, I was building a Node.js API; however, this process should translate regardless of the language or tools you’re using. Before going any further, please make sure you’ve entered your AWS_ACCESS_KEY and AWS_SECRET_KEY into your CircleCI project. This can be done from the Settings page of your project, under the Permissions heading.

Hit the AWS Permissions link and enter your variables

Now we’re going to need to edit our circleci config.yml file. The below is my finished version. I know it’s pretty long and messy and can definitely be refactored, but it does the job for now.

#YAML - .circleci/config.yml
version: 2
jobs:
  build:
    branches:
      only:
        - master # list of branches to build
    docker:
      - image: circleci/node
    working_directory: ~/repo
    steps:
      - checkout
      - run:
          name: Install Java
          command: 'sudo apt-get update && sudo apt-get install default-jre default-jdk'
      - run:
          name: Install Python
          command: 'sudo apt-get update && sudo apt-get install -y python-dev'
      - run:
          name: Install Python
          command: 'sudo curl -O'
      - run:
          name: Install Python
          command: 'sudo python'
      - run:
          name: Install AWS CLI
          command: 'sudo pip install awsebcli --upgrade'
      - run:
          name: Setup Container
          command: |
            curl -k -L -o dynamodb-local.tgz
            tar -xzf dynamodb-local.tgz
            java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb
          background: true
      - run:
          name: Update yarn
          command: 'yarn global add npm@latest'
      - restore_cache:
          key: dependency-cache-{{ checksum "package.json" }}
      - run:
          name: Install Dependencies
          command: yarn install
      - save_cache:
          key: dependency-cache-{{ checksum "package.json" }}
          paths:
            - node_modules
      - run:
          name: Start Server
          command: 'yarn ci-start'
          background: true
      - run:
          name: Create Table
          command: 'yarn create-db'
      - run:
          name: Load Data
          command: 'yarn load-data'
      - run:
          name: Deploy to AWS Elastic Beanstalk
          command: 'eb init MyApp -r eu-west-2 -p "arn:aws:elasticbeanstalk:eu-west-2::platform/Node.js running on 64bit Amazon Linux/4.4.3"'
      - run:
          name: Run Tests
          command: 'yarn test'
      - run:
          name: Deploy to AWS Elastic Beanstalk
          command: 'eb deploy your-env'

First up, we install Java, as DynamoDB requires it to run. The next part is optional, but as my app deploys to Elastic Beanstalk, I install Python, as the AWS EB CLI requires it. Lastly, we download DynamoDB directly from Amazon; I’ve chosen eu-west-2, but choose whichever region is nearest to you. This downloads as an archive, so we unpack it and then run the .jar file. The important thing here is to note the use of the option background: true. This ensures it runs in the background and doesn’t stall your build from moving on to the next stage. From here, you can launch your server as a background task, load your data in and run your tests.

Hope this helped anyone having trouble incorporating DynamoDB into their build pipelines. If you’re stuck, or have any questions, please ask!

As always, thanks for reading, hit 👏 if you like what you read and be sure to follow to keep up to date with future posts.


Creating a Google Chrome Extension

I recently created a simple Chrome Extension (CrypCheck) that displays the current, live, price of some of the popular Crypto Currencies. After going through the process, it’s a lot quicker and easier than you think to write, test and publish your own Chrome Extension.

Firstly, setup a working directory and push it up to Github (or whatever else you use).

$ mkdir my_awesome_chrome_extension
$ cd my_awesome_chrome_extension
$ git init
$ echo "#my_awesome_chrome_extension" >>
$ git add .
$ git commit -m 'first commit, setting up project'
$ git remote add origin yourremote.git
$ git push origin master

Next, we’re gonna want to download the starter files that Google provides in its docs, under the Resources heading. You should end up with four files: manifest.json, popup.html, popup.js and icon.png (this one is optional and can be replaced with any icon of your choice).

Now we’re going to load the extension in so we can test it locally. Navigate to chrome://extensions/. From here, you should see a button at the top labelled Load unpacked extension. After clicking it, navigate to the working directory of your plugin and select it. This loads the extension into your browser from the directory. You should be able to play with the sample app now, which allows you to change colours. Now you’re ready to build and test your app!

Remember, if you require any third party libraries like jQuery, you’re going to need to load them in. Also, make sure to edit the manifest.json to reflect details of your own app, as opposed to the sample app. This is an example of what my manifest.json looks like.

#JSON - manifest.json
{
  "manifest_version": 2,
  "name": "CrypCheck",
  "description": "This extension allows the user to check the price of Bitcoin, Bitcoin Cash, Ethereum, Litecoin, Ripple and IOTA.",
  "version": "2.0",
  "browser_action": {
    "default_icon": "icon.png",
    "default_popup": "popup.html"
  },
  "permissions": []
}

Depending on the complexity of your project, you might also want to set up a folder structure, especially if you find yourself loading in fonts, libraries and anything else. Below is the folder structure of my app.

One thing to note regarding the icon.png size: Google requires it to be 128x128 pixels. You can also provide additional sizes of 48x48 and 16x16.

Publishing your extension

After completing your extension and making sure it works, you’re gonna want to share it with the world. Head over to the Chrome Web Store Developer Dashboard and log in with your Google account. Once the page loads, hit the Add new item button. This will prompt you to upload a zip of your project, which can be created by running the following command.

$ zip -r my_awesome_chrome_extension

Once the zip file uploads, you can edit the details before publishing. This includes where you want to distribute the app, the category you want it to appear in and all the other details regarding its publication.

Once you’re finished tweaking, hit Publish Changes. Congrats! You’ve published a Chrome Extension! Make sure to delete the locally loaded copy in your browser and re-download it from the Web Store, so you’re running the production version.

If you’re interested, you can take a look at my app’s repo as an example.

As always, thanks for reading, hit 👏 if you like what you read and be sure to follow to keep up to date with future posts.

React with CircleCI, AWS S3 and AWS CloudFront

Today we’re going to be whipping up a simple React Project with a build pipeline that deploys to an S3 bucket, which is distributed through CloudFront. The benefits of deploying your React App this way are that you can automate your build and deployment tasks and by distributing your app across CloudFront you’ll be able to provision a free SSL certificate on it, which is great!


Getting Started

First of all, let’s scaffold an app using create-react-app.

$ create-react-app myawesomapp
Success! Created myawesomapp at /your/path/myawesomapp
Inside that directory, you can run several commands:
yarn start
Starts the development server.
yarn build
Bundles the app into static files for production.
yarn test
Starts the test runner.
yarn eject
Removes this tool and copies build dependencies, configuration files
and scripts into the app directory. If you do this, you can’t go back!
We suggest that you begin by typing:
cd myawesomapp
yarn start
Happy hacking!

Now, setup your repo on Github and push your code up.

$ git add .
$ git commit -m 'first commit, scaffolds project with create-react-app'
$ git remote add origin https://yourrepo
$ git push -u origin master

Now you should have your initial project up on Github. From here we’ll check out feature branches and merge them in as we complete them. You can use a project management tool like Waffle or Trello if you like; it helps keep track of what needs to be done. Depending on the complexity of your project, you could also check out a staging branch and merge your features into that bit by bit, but for our purposes we’ll stick with merging into master. The process is identical; it just requires a bit more configuration.

$ git checkout -b 1-setting-up-build-pipeline
$ git push origin 1-setting-up-build-pipeline

Setting up AWS

Head over to AWS and go to the S3 dashboard. Here we’ll create a bucket and set its permissions to public.

s3 dashboard

Click Create bucket and call your bucket something useful, like the name of your app or the domain it’ll live on, if you’ve bought one. Now we’ll quickly configure the permissions of the bucket and set it to public so that it can be accessed and viewed in a browser by the general public. Click the Permissions tab and then the Bucket Policy button; this will bring up an editor.

AWS uses JSON policy files to manage permissions of buckets and other services; the example below allows the public to read the contents of the S3 bucket.

#JSON - Bucket Policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "*"
    }
  ]
}

You might be tempted to change the date, but don’t. It’s taken from the AWS docs and refers to a particular version of the policy language; changing it may break or alter your permissions.
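One thing worth noting: on a real bucket, the Resource is usually scoped to the bucket’s ARN rather than a bare *. With a hypothetical bucket named my-awesome-app-bucket, that line would look like this:

```json
"Resource": "arn:aws:s3:::my-awesome-app-bucket/*"
```

The trailing /* is what applies the statement to the objects inside the bucket rather than the bucket itself.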

Bucket Policy

Lastly, you’ll need to configure your S3 bucket to host static sites — this is literally just clicking a few buttons.

Head over to the Properties tab and click the Static Website Hosting card. You’ll need to input the entry page, which in our case will be index.html, you can also create an error page, but we’ll leave that blank for now. Hit Save and we’re ready to go!

Using the AWS CLI, we’re gonna sync our project with our bucket. Make sure you’ve got your access keys set up; if not, just type aws configure in your shell and enter them there. These can be found/generated under My Security Credentials.

Before syncing, we’ll need to build our project for production. We’ll then sync the contents of the build folder with our S3 bucket and BAM we’re live.

$ yarn build
yarn run v1.3.2
$ react-scripts build
Creating an optimized production build...
Compiled successfully.
File sizes after gzip:
35.65 KB  build/static/js/main.35d639b7.js
299 B build/static/css/main.c17080f1.css
The project was built assuming it is hosted at the server root.
To override this, specify the homepage in your package.json.
For example, add this to build it for GitHub Pages:
"homepage" : "",
The build folder is ready to be deployed.
You may serve it with a static server:
yarn global add serve
serve -s build
✨  Done in 7.49s.
$ aws s3 sync build/ s3:// --delete
upload: build/service-worker.js to s3://
upload: build/manifest.json to s3://
upload: build/favicon.ico to s3://
upload: build/index.html to s3://
upload: build/static/css/ to s3://
upload: build/asset-manifest.json to s3://
upload: build/static/media/logo.5d5d9eef.svg to s3://
upload: build/static/css/main.c17080f1.css to s3://
upload: build/static/js/main.35d639b7.js to s3://
upload: build/static/js/ to s3://

Now head back to your S3 bucket, under the Static Website Hosting card and click the Endpoint URL. If all is good, you’ll see the React Welcome Page.

Your app’s now on S3! We can automate this further by combining the commands. Add the following to your package.json under the scripts section.

{
  "name": "myawesomapp",
  "version": "0.1.0",
  "private": true,
  "dependencies": {
    "react": "^16.2.0",
    "react-dom": "^16.2.0",
    "react-scripts": "1.0.17"
  },
  "scripts": {
    "start": "react-scripts start",
    "build": "react-scripts build",
    "test": "react-scripts test --env=jsdom",
    "eject": "react-scripts eject",
    "deploy": "yarn build && aws s3 sync build/ s3:// --delete"
  }
}

This runs yarn build and syncs the contents of your build/ folder with your S3 bucket.

CircleCI Setup

Firstly, add your project by clicking Setup Project on the CircleCI Dashboard.

Next, make sure you’re using CircleCI 2.0 as opposed to the older 1.0. As we’re creating a React project, we want our container preconfigured with Node.

Follow the instructions laid out on the CircleCI dashboard. Your config.yml file, inside your .circleci folder, should look something like this.

# Javascript Node CircleCI 2.0 configuration file
# Check for more details
version: 2
jobs:
  build:
    docker:
      # specify the version you desire here
      - image: circleci/node:7.10
      # Specify service dependencies here if necessary
      # CircleCI maintains a library of pre-built images
      # documented at
      # - image: circleci/mongo:3.4.4
    working_directory: ~/repo
    steps:
      - checkout
      # Download and cache dependencies
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "package.json" }}
            # fallback to using the latest cache if no exact match is found
            - v1-dependencies-
      - run: yarn install
      - save_cache:
          paths:
            - node_modules
          key: v1-dependencies-{{ checksum "package.json" }}
      # run tests!
      - run: yarn test
      - run: sudo apt-get update && sudo apt-get install -y python-dev
      - run: sudo curl -O
      - run: sudo python
      - run: sudo pip install awscli --upgrade
      - run: aws --version
      - run: aws s3 ls
      - run: yarn run deploy

You’ll notice we’ve added a few extra things: we install Python and download the AWS CLI. After that, we check it installed correctly by printing the version, then list our S3 buckets.

Once this is done, head back to the CircleCI dashboard and hit Start Building. From here you can navigate to your Project Settings and save your AWS_ACCESS_KEY_ID and your AWS_SECRET_ACCESS_KEY.

Once this is done, push your changes up to GitHub and open a pull request. CircleCI won’t register on the first pull request, so just hit merge.

You’re pretty much set now!

Check out another branch for your next feature. Once you’re ready to open a pull request, you’ll notice CircleCI will run your tests; if everything passes and you merge, it’ll deploy!

Congrats! You’ve setup a React App with a build pipeline, hosted on AWS!

Deploying your App Through AWS CloudFront

Now you’ve got a build pipeline set up and synced to deploy to your S3 bucket, which is great. But it’d be even better if you distributed it across a CDN and put an SSL certificate on it. Enter CloudFront…

Navigate to CloudFront’s dashboard and hit Create Distribution.

You’ll then be given two options; as we’re creating a web app, you’ll need to select Web.

On the next screen, in the Origin Domain Name field, you’ll get a dropdown list. Choose the name of your S3 bucket. You’ll also want to change the Viewer Protocol Policy to Redirect HTTP to HTTPS. Lastly you’ll need to set the Default Root Object to index.html as that’s the entry page to our app.

Once you’re done, hit Create Distribution. It’ll take a short while to deploy, but once it’s complete, head over to the distribution URL and you should see your app, served over HTTPS with a valid SSL certificate.
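One thing to keep in mind once CloudFront sits in front of your bucket: it caches your files at the edge, so after a new deploy users may keep seeing the old build until the cache expires. A sketch of a manual cache invalidation (the distribution ID below is a placeholder; yours is shown on the CloudFront dashboard):

```shell
# Invalidate all cached paths so the next request fetches the fresh build from S3.
# The distribution ID is a placeholder; substitute your own.
aws cloudfront create-invalidation --distribution-id E1A2B3C4D5E6F7 --paths "/*"
```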

You’re done! If you own your own domain, and it’s hosted in AWS Route 53, you can create an Alias record to point to the Distribution URL.

If you have any questions, or need a hand with anything, drop a comment and I’d be happy to help! Here’s a link to the example repo.

As always, thanks for reading, hit 👏 if you like what you read and be sure to follow to keep up to date with future posts.

React with CircleCI, AWS S3 and AWS CloudFront was originally published in CloudBoost on Medium, where people are continuing the conversation by highlighting and responding to this story.

Net Neutrality

With America gearing up for a big vote on Net Neutrality, I thought I’d do a repost to emphasise how important it is and what it could mean for the UK.

I’ve done a previous post covering this issue, but in a nutshell: Net Neutrality promotes a free and open internet where all data, content and applications are treated equally, without discrimination. Essentially, everything carries on as it currently does.

This image explains all you need to know about why Net Neutrality is important, because this is what the alternative could look like.

Net Neutrality

An ISP could prioritise traffic however it wishes and charge you for the luxury of using different applications. Where now you pay a flat fee for internet access, ISPs could impose, as the image suggests, multiple packages for different websites. It doesn’t take a genius to figure out this is negative for everyone: it’s anti-competition, anti-freedom, anti-everything, with no benefit except to those right at the top.

But this is in the US, so why should I care in the UK?

For now, we’re protected by EU law, which ensures an open and competitive internet market. However, with Brexit looming and all the other issues surrounding it, you might worry that this particular law isn’t translated correctly into the repeal bill.

To be fair, the UK has a much more competitive broadband market, which allows users to switch with ease. However, a lot of these providers also offer combined broadband and online TV deals (think Sky and BT, for example). These companies have an incentive to prioritise their own content over Netflix, Prime or any other streaming service.

Virgin already offers data-free messaging for Facebook Messenger, WhatsApp and Twitter. Whilst this sounds amazing, it’s bad for everyone in the long term. It’s anti-competitive and makes it very difficult for new startups in a similar sector to break into the market and gain a user base. Why would a user try out a cool new product if the alternative is free to use? It creates an unfair playing field, which leads to a stagnant market and less innovation.

Whilst things are better here in the UK than in the US, the fact remains it’s an issue to keep in mind, especially with Brexit on the horizon. If the vote passes and the rules are repealed in the US, it might pressure other countries to revisit their own open internet laws.

As always, thanks for reading, hit 👏 if you like what you read and be sure to follow to keep up to date with future posts.

Ethereum and Blockchain Technology

After Bitcoin’s recent crazy highs and its high volatility, people are getting more interested in cryptocurrencies and how they work.

What is a Cryptocurrency?

Bitcoin, Ethereum and Litecoin are all examples of cryptocurrencies, and they are all decentralized. This means no central bank or body regulates, monitors or owns them. They work on a peer-to-peer network, meaning users connect directly to each other and make transactions secured by cryptography, making fraud extremely difficult. Once the transactions are verified by the network, they are added to a shared public ledger: the blockchain.

The Blockchain

Blockchain Diagram

As the diagram illustrates, a transaction is requested and the network of nodes (computers) is notified, at which point it validates the transaction request. Once verified, the transaction gets added to the public ledger, the blockchain. Essentially, the blockchain is just a giant immutable data structure, with each event, transaction or contract getting added to the end of it. If you’re curious, you can view live Bitcoin transactions being added to the blockchain here.
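To make the chaining idea concrete, here’s a toy sketch in shell (purely illustrative, nothing like a real Bitcoin node): each block’s hash is computed over the previous block’s hash plus its own data, so changing any earlier entry changes every hash after it.

```shell
#!/bin/sh
# Toy hash chain: each block's hash covers the previous hash and the new data,
# so tampering with any block invalidates the rest of the chain.
prev=0
for data in "Alice pays Bob 5" "Bob pays Carol 2" "Carol pays Dave 1"; do
  hash=$(printf '%s|%s' "$prev" "$data" | sha256sum | cut -d' ' -f1)
  echo "data: $data"
  echo "hash: $hash"
  prev=$hash
done
```

Run it twice and you’ll get identical hashes; change one transaction and every hash from that block onwards changes.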

Types of blockchain

There are a few different types of blockchain.

Blockchains are similar in theory, but their approach to tasks can be slightly different. Bitcoin looks to work as a currency, where you can transfer value directly to the recipient without any middle man (bank). Ethereum, instead, offers a much more powerful solution that allows developers to create applications utilising Blockchain technology, through Smart Contracts and Solidity.

What is a Smart Contract?

A smart contract is a program that executes exactly as its creator set it up to. A developer could write a program and deploy it without fear of fraud or third-party interference, and benefit from 100% uptime. Keeping with the Bitcoin example, a simple smart contract transfers value from one user to another if the necessary conditions are met. Here, though, we’re limited to currency. This is where Ethereum is different: it replaces Bitcoin’s somewhat restrictive scripting language with its own, called Solidity, allowing developers to build and deploy applications with it.

What is Solidity?

Solidity is a contract-oriented, high-level language for implementing smart contracts. It was influenced by C++, Python and JavaScript and is designed to target the Ethereum Virtual Machine (EVM). It’s statically typed, meaning the type of a variable is known at compile time, as opposed to run time. This has benefits: type errors are picked up earlier in the development cycle, and it can lead to faster programs, because the compiler can produce optimised machine code if it knows the types ahead of time. Having looked through some code examples, it visually resembles JavaScript, but it’s not.

What can I build?

Anything you like! The poster example apps are voting, blind auctions, crowdfunding and multi-signature wallets. Ethereum is still young and some of the tech is still in beta, but there seems to be real curiosity in building these new types of applications.


Blockchain technology seems to be gathering pace, mainly through Bitcoin, but as organisations learn its benefits and the power of the technology, it could become a new way to develop applications. Now seems a good time to start experimenting with it and seeing what real-world applications it could have.

As always, thanks for reading, hit 👏 if you like what you read and be sure to follow to keep up to date with future posts.

Ethereum and Blockchain Technology was originally published in Cryptocurrency Hub on Medium, where people are continuing the conversation by highlighting and responding to this story.

Creating your own Jekyll Theme Gem

After searching for a short while, I couldn’t quite find a Jekyll theme that I liked. All the ones I came across needed a lot of work, so I thought I’d whip up my own theme and make it a gem. It’s a lot quicker and easier than you might think.

For my theme, I used Materialize — a front-end framework based on Material Design.

Getting Started

Firstly, head over to RubyGems and sign up for an account — you’ll need these credentials later when you push your gem up.

Jekyll already includes a new-theme command which scaffolds a skeleton theme for you. It’ll look something like this.

# bash
$ jekyll new-theme testing123
create   /Users/jameshamann/Documents/Development/testing123/_layouts/page.html
create /Users/jameshamann/Documents/Development/testing123/_layouts/default.html
create /Users/jameshamann/Documents/Development/testing123/Gemfile

create /Users/jameshamann/Documents/Development/testing123/
create /Users/jameshamann/Documents/Development/testing123/LICENSE.txt

initialize /Users/jameshamann/Documents/Development/testing123/.git
create /Users/jameshamann/Documents/Development/testing123/.gitignore
Your new Jekyll theme, testing123, is ready for you in   /Users/jameshamann/Documents/Development/testing123!
For help getting started, read /Users/jameshamann/Documents/Development/testing123/

It provides a nice starter README, which explains the setup and what the theme includes. As well as this, the command creates a .gemspec file, which contains all the information and build instructions for your gem.

# ruby
# coding: utf-8

Gem::Specification.new do |spec|
  spec.name          = "testing123"
  spec.version       = "0.1.0"
  spec.authors       = [""]
  spec.email         = [""]

  spec.summary       = %q{TODO: Write a short summary, because Rubygems requires one.}
  spec.homepage      = "TODO: Put your gem's website or public repo URL here."
  spec.license       = "MIT"

  spec.files         = `git ls-files -z`.split("\x0").select { |f| f.match(%r{^(assets|_layouts|_includes|_sass|LICENSE|README)}i) }

  spec.add_runtime_dependency "jekyll", "~> 3.6"

  spec.add_development_dependency "bundler", "~> 1.12"
  spec.add_development_dependency "rake", "~> 10.0"
end

When you’re done with your theme, you’ll want to go in here and edit the details at the top, so once your Gem’s live, all the necessary information is available.

The site itself functions the same as any Jekyll site, so while developing you can use jekyll serve to boot it up on a local server; that way you can view and test the site whilst you work on your theme.
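The development loop, assuming you’re inside the theme directory scaffolded earlier, looks something like this:

```shell
cd testing123              # the theme scaffolded by jekyll new-theme
bundle install             # install jekyll and the theme's dependencies
bundle exec jekyll serve   # preview the theme at http://localhost:4000
```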

Testing your Gem

To test your gem, let’s build it and load it on another jekyll site.

# bash
$ gem build YOURTHEME.gemspec

This will generate a .gem file within your directory; it won’t show up in git, as it’s covered by your .gitignore file. Next, generate a new Jekyll site, add your gem to the Gemfile (specifying its path), bundle install, change the _config.yml to use your theme and then jekyll serve. This should serve up your new site, using your gem as its theme.

# bash 
$ jekyll new mysite
  Bundler: Fetching gem metadata from
Bundler: Fetching gem metadata from
Bundler: Resolving dependencies...
Bundler: Using public_suffix 3.0.1
Bundler: Using addressable 2.5.2
Bundler: Using bundler 1.16.0.pre.3
Bundler: Using colorator 1.1.0
Bundler: Using ffi 1.9.18
Bundler: Using forwardable-extended 2.6.0
Bundler: Using rb-fsevent 0.10.2
Bundler: Using rb-inotify 0.9.10
Bundler: Using sass-listen 4.0.0
Bundler: Using sass 3.5.3
Bundler: Using jekyll-sass-converter 1.5.1
Bundler: Using ruby_dep 1.5.0
Bundler: Using listen 3.1.5
Bundler: Using jekyll-watch 1.5.1
Bundler: Using kramdown 1.16.2
Bundler: Using liquid 4.0.0
Bundler: Using mercenary 0.3.6
Bundler: Using pathutil 0.16.0
Bundler: Using rouge 2.2.1
Bundler: Using safe_yaml 1.0.4
Bundler: Using jekyll 3.6.2
Bundler: Using jekyll-feed 0.9.2
Bundler: Using minima 2.1.1
Bundler: Bundle complete! 4 Gemfile dependencies, 23 gems now installed.
Bundler: Use `bundle info [gemname]` to see where a bundled gem is installed.
New jekyll site installed in /Users/jameshamann/Documents/Development/mysite.
$ cd mysite
$ atom .
# ruby
# Gemfile
gem "YOURTHEME", :path => "path/to/your/gem"
# bash
$ bundle
Fetching gem metadata from
Fetching gem metadata from
Resolving dependencies...
Using public_suffix 3.0.1
Using addressable 2.5.2
Using bundler 1.16.0.pre.3
Using colorator 1.1.0
Using ffi 1.9.18
Using forwardable-extended 2.6.0
Using rb-fsevent 0.10.2
Using rb-inotify 0.9.10
Using sass-listen 4.0.0
Using sass 3.5.3
Using jekyll-sass-converter 1.5.0
Using listen 3.0.8
Using jekyll-watch 1.5.0
Using kramdown 1.16.2
Using liquid 4.0.0
Using mercenary 0.3.6
Using pathutil 0.16.0
Using rouge 2.2.1
Using safe_yaml 1.0.4
Using jekyll 3.6.2
Using jekyll-feed 0.9.2
Using jekyll-material-theme 0.1.0 from source at `../material-theme`
Bundle complete! 4 Gemfile dependencies, 22 gems now installed.
Use `bundle info [gemname]` to see where a bundled gem is installed.
# _config.yml
theme: YOURTHEME
# bash
$ jekyll serve
Configuration file: /Users/jameshamann/Documents/Development/mysite1234/_config.yml
Source: /Users/jameshamann/Documents/Development/mysite1234
Destination: /Users/jameshamann/Documents/Development/mysite1234/_site
Incremental build: disabled. Enable with --incremental
done in 0.436 seconds.
Auto-regeneration: enabled for '/Users/jameshamann/Documents/Development/mysite1234'
Server address:
Server running... press ctrl-c to stop.

Head over to http://localhost:4000 and you should be able to see your site, using your gem theme.

Going Live

Once you’ve styled, created and tested your Jekyll theme, it’s time to go live! After editing your .gemspec file and making sure all the necessary files are included, use the build command to build the first version of your gem. RubyGems uses Semantic Versioning; your first push probably isn’t a major release, so the scaffold defaults to version 0.1.0.

Briefly, Semantic Versioning works by incrementing the numbers based on MAJOR.MINOR.PATCH releases. A MAJOR version, as the word suggests, is a major release where you make incompatible API changes. A MINOR version adds functionality in a backwards-compatible manner. A PATCH version is for bug fixes. It’s best practice to follow these guidelines when releasing or updating your gem, so keep that in mind if you tweak your theme further.
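As a hypothetical example of how a theme gem’s version might evolve under those rules (the changes listed are made up for illustration):

```shell
# Hypothetical release history for a theme gem, following MAJOR.MINOR.PATCH:
# 0.1.0 -> 0.1.1   fix a broken footer partial (PATCH)
# 0.1.1 -> 0.2.0   add an optional sidebar include, backwards compatible (MINOR)
# 0.2.0 -> 1.0.0   rename the layouts, breaking existing sites (MAJOR)
```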

# bash
$ gem build YOURTHEME.gemspec
$ gem push YOURTHEME.gem

This is where you’ll need the login details you created earlier. Once filled in, head over to RubyGems and search for your gem. It should appear in the list of results; go ahead and view the page to ensure all the details are correct. If you make a mistake, don’t worry: you can pull a version off RubyGems with a simple command.

# bash
$ gem yank YOURTHEME -v 0.1.0

Congrats, you’ve just published a gem! You can also add your theme to various Jekyll theme sites; most of them require you to fork the repo and open a pull request with a new post about your theme.

As always, thanks for reading, hit 👏 if you like what you read and be sure to follow to keep up to date with future posts.
