Hacker Noon

High-Quality PDF Generation Using Puppeteer


And how I used it to make a resume!


I was looking for a free resume builder. I had been using Novoresume’s free plan, but it wasn’t enough, as I couldn’t customize most things, and their paid plans are pretty expensive. So, as a web developer, I just coded my own resume builder with HTML + CSS 🎉

Then, when I wanted to print the resume as a PDF, I faced a challenge: Google Chrome’s default print command produces a rather low-quality PDF on any web page because the print DPI is low.

Searching through PDF generators in JS, I couldn’t find any good library that could produce an accurate PDF of my HTML with links, images, styling, etc. Puppeteer came to the rescue: it can take a screenshot or generate an exact PDF of any webpage!

Let’s make a webpage

This is a sample resume I made with HTML + CSS. I built it with ReactJS, since that gives me the power of JS for repeating patterns and a component architecture, but a plain .html file is also fine for the purpose.

If we use Chrome’s print command (`CTRL+P`) to print this page and save it as a PDF, we’ll see that the PDF is not high quality. That’s because of Chrome’s default resolution and 72 DPI print setting.

Using Puppeteer

  • We need NodeJS installed on our machine.

Let’s see both the React and non-React versions.


Suppose our resume page is running on port 3000 and available at http://localhost:3000.

Now, as we’re already inside a React project with a package.json file, let’s install Puppeteer first with the command below –

npm install --save puppeteer
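The walkthrough below refers to line numbers in an embedded script that didn’t survive this export. Here is a reconstruction of generatePDF.js that matches the walkthrough line for line; the viewport dimensions are my own assumption and should be tuned to your page.

```javascript
const puppeteer = require("puppeteer"); // Line 1: load Puppeteer

(async () => {
  const browser = await puppeteer.launch();      // Line 4: open a browser
  const page = await browser.newPage();          // Line 5: open a page (tab)
  await page.setViewport({ width: 1240, height: 1754, deviceScaleFactor: 2 }); // Line 6
  await page.goto("http://localhost:3000", { waitUntil: "networkidle2" });     // Line 7
  await page.pdf({                               // Lines 8-13: write resume.pdf
    path: "resume.pdf",
    format: "A4",
    printBackground: true,
    pageRanges: "1"
  });

  await browser.close();                         // Line 15: done
})();
```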

Line 1: We’re loading puppeteer.

Line 4: Initializing a browser (we can think of this as opening a new Chrome window).

Line 5: Opening a new page (we can think of this as opening a new Chrome tab).

Line 6: Setting the viewport’s width and height. deviceScaleFactor is 1 by default; we’re doubling it so that the quality of the output gets better.

Line 7: Visit localhost:3000 and wait until everything has loaded properly (networkidle2).

Line 8–13: Generate a .pdf file named resume.pdf with A4 paper size in the current directory. Set printBackground to true to print background colors and styling. pageRanges makes sure we’re getting only the first page (we could avoid it if we had a multipage resume).

Line 15: Closing the browser as the work has been finished.

Now, let’s create a file named generatePDF.js with the code above in our project’s root folder, and run the command below –

node generatePDF.js

This will generate a resume.pdf file in the same directory. Now if we open up the pdf file with a pdf viewer, we can see that the pdf is better in quality than the directly printed one from the browser (ctrl+p).

Using a Plain HTML File

We need to change only Line 7 from the ReactJS version:

await page.goto("file:///practice/resumeapp/resume.html", { waitUntil: "networkidle2" });

But, as we have no Node/React project set up for this plain HTML version, let’s create one.

mkdir resumeapp
cd resumeapp

This will create a directory named resumeapp and take you into the directory.

npm init --yes

This will initialize a package.json file in your directory. Now let’s install puppeteer –

npm install --save puppeteer

Now let’s create a file named generatePDF.js with the code below –
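The embedded code is missing here as well; it is the same reconstruction as in the React version, with only the goto line changed to load the local file (the path is from the example above, so replace it with your own, and the viewport dimensions remain my assumption):

```javascript
const puppeteer = require("puppeteer");

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.setViewport({ width: 1240, height: 1754, deviceScaleFactor: 2 });
  // The only change: load the local HTML file instead of localhost:3000
  await page.goto("file:///practice/resumeapp/resume.html", { waitUntil: "networkidle2" });
  await page.pdf({
    path: "resume.pdf",
    format: "A4",
    printBackground: true,
    pageRanges: "1"
  });
  await browser.close();
})();
```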

Make sure your resume.html file’s path is correct. Now, run the command below in the same shell/directory.

node generatePDF.js

This will generate a resume.pdf file in the same directory. Now if we open up the pdf file with a pdf viewer, we can see that the pdf is better in quality than the directly printed one from the browser (ctrl+p).

This PDF generation trick can be pretty handy, as Puppeteer lets us use Chrome’s awesome features in the backend. We can take screenshots, make books from crawled data, and what not! The possibilities are endless, and they’re yours to explore.

Good Luck 🙂

High-quality pdf generation using puppeteer was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.



Breaking Down JPM Coin

Loathed by some, this asset is a consequential stepping stone

Is JPM Coin just Schrute Bucks or the new Gold Standard?

Since the announcement of JPM Coin, I have seen the full spectrum of reactions. From comparing the coin to Schrute Bucks 💵 or V-Bucks (the Fortnite videogame currency) to claims that JPM Coin will become the “gold standard” of cryptocurrency because “The Man” has taken the best part of decentralized technologies and internalized it for their own gain. It quickly became challenging to discern facts from reactionary opinions meant to guard a previously held position, so I decided to go do some digging of my own. I’ve read through J.P. Morgan Chase’s statements on JPM Coin, dove into the Quorum Protocol Whitepaper (the blockchain protocol on which the JPM Coin will initially reside) and investigated the J.P. Morgan GitHub for clues on potential next steps.

What is JPM Coin

To start, let’s make sure we are clear on the type of digital asset JPM Coin represents. JPM Coin is known as a stablecoin, a digital coin which is pegged to the USD, 1:1. Most stablecoins, including JPM Coin, maintain their value based on the premise that for each stablecoin released, there is an associated USD collateralized. There are currently several such stablecoins in circulation (Tether, Gemini, Circle, etc). JPM Coin is an interesting new entrant because it is backed by the largest U.S. bank and a strong balance sheet, providing an unprecedented level of credibility to the USD-collateralized stablecoin. (Note: there are also stablecoins which are crypto-collateralized and non-collateralized.)

A major point of contention surrounding JPM Coin is rooted in the definition of cryptocurrency. Some have called JPM Coin a cryptocurrency, which is incorrect… mostly. There is a lack of clarity around this point because “cryptocurrency” has many definitions. Looser definitions consider any digital representation of a tradeable asset a cryptocurrency (such as V-Bucks, the Fortnite digital videogame currency purchased with real fiat currency). Under this definition, JPM Coin is a cryptocurrency. But, under the more specific definition, the one used by computer scientists, engineers, and experts in the DLT community, JPM Coin is not a cryptocurrency. Under this definition, a cryptocurrency is a decentralized digital currency in which encryption techniques are used to regulate the generation of units of currency (e.g. via Proof of Work) and verify the transfer of funds independently of a central bank. They are integral to a system of gamified security on a public, distributed blockchain.

Based on this definition, JPM Coin clearly isn’t a cryptocurrency, and that is OKAY. J.P. Morgan Chase has long held cryptocurrencies at arm’s length while being optimistic about blockchain technology, and JPM Coin is no deviation from this stance.

How it Works

Just because JPM Coin isn’t a traditional cryptocurrency doesn’t mean it is devoid of interesting functionality and use cases within the J.P. Morgan environment. To start, JPM Coin will be trialed with their wholesale banking clients. Here is how JPM Coin works as a stablecoin at a high level: a client converts fiat currency (USD) into tokenized JPM Coins, which reside on the Quorum blockchain. The JPM Coins are then transferred to another J.P. Morgan client, at which point they are converted back into fiat currency and the JPM Coins are destroyed.

Source: JP Morgan JPM Coin Q&A
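To make the lifecycle concrete, here is a toy sketch of a 1:1-collateralized coin ledger. This is purely illustrative (the class and method names are my own), not J.P. Morgan’s actual implementation:

```javascript
// Toy 1:1 USD-collateralized coin ledger: deposits mint coins, redemptions burn them.
class ToyStablecoinLedger {
  constructor() {
    this.balances = new Map(); // client -> coin balance
  }

  // A client deposits USD; an equal number of coins is minted.
  mint(client, usdDeposited) {
    this.balances.set(client, (this.balances.get(client) || 0) + usdDeposited);
  }

  // Coins move between clients in one step on the shared ledger.
  transfer(from, to, amount) {
    if ((this.balances.get(from) || 0) < amount) throw new Error('insufficient funds');
    this.balances.set(from, this.balances.get(from) - amount);
    this.balances.set(to, (this.balances.get(to) || 0) + amount);
  }

  // Coins are destroyed and the client receives USD 1:1.
  redeem(client, amount) {
    if ((this.balances.get(client) || 0) < amount) throw new Error('insufficient funds');
    this.balances.set(client, this.balances.get(client) - amount);
    return amount; // USD paid out
  }
}

const ledger = new ToyStablecoinLedger();
ledger.mint('ClientA', 100);                 // ClientA deposits $100
ledger.transfer('ClientA', 'ClientB', 60);   // settles immediately, no intermediaries
const usdOut = ledger.redeem('ClientB', 60); // ClientB cashes out; coins burned
```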

JPM Coin enables real-time gross settlements (RTGS) between institutional clients. The current process for settling and clearing payments requires several transactions between a multitude of banking networks. This process is clearly inefficient and worse, is susceptible to fraud and risk of server outages or attacks which could cripple a bank’s clearing systems. By tokenizing fiat currency, JPM Coin cuts the wait time for finality of payment from approximately 2 days to mere seconds, doing so securely and confidentially.

This was the goal of JPM Coin: to reduce clients’ counterparty and settlement risk, decrease capital requirements, and enable instant value transfer, and J.P. Morgan chose a coin on a blockchain for the job. Several articles have called out JPM Coin and the use of blockchain as unnecessary, arguing that any protocol connecting internal databases could have done the job. These comments miss the nuance of the opportunity presented by private blockchains for large financial institutions. As Umar Farooq, the head of Digital Treasury Services and Blockchain for J.P. Morgan Chase, stated:

The applications are frankly quite endless; anything where you have a distributed ledger which involves corporations or institutions can use this… Looking further out, the JPM coin could be used for payments on internet-connected devices if that use for blockchain catches on.

The Distributed Ledger

JPM Coin wouldn’t be possible without a distributed ledger on which to transact and that ledger is called Quorum: J.P. Morgan’s project to create a private blockchain forked from the official Go-Ethereum blockchain, with additional data privacy and improved transactional efficiency. The below image provides some of the key aspects of the Quorum platform and architecture:

Source: Quorum White Paper

A public blockchain like Ethereum wouldn’t suffice for a few reasons: public chains don’t provide the level of data privacy required for wholesale banking and the transaction speeds on public chains are currently notoriously slow (although Ethereum is actively rolling out solutions to improve performance). Quorum was designed to address these key needs; however, the JP Morgan team decided to utilize Ethereum as the core architecture of the blockchain:

The Quorum design outlined in this paper demonstrates how the seminal Ethereum platform can be extended to meet the key requirements of enterprise and in particular, financial institutions in preserving privacy in a distributed, decentralized blockchain consortium network. By building on Ethereum, Quorum inherits the maturity of the production hardened go-ethereum code base…

Quorum has implemented some unique architecture to enable private transactions and ensure consensus is efficiently reached amongst the members of the private blockchain.

Data Privacy On-Chain

Quorum supports both public and private transactions. The public transactions are managed much as they would be on the Ethereum blockchain. Each Quorum node maintains the public state stored in a Patricia-Merkle tree to ensure that everyone is in agreement of the transactions which have occurred. When a private transaction occurs, every Quorum node receives a hash of the encrypted private transaction data, but only the parties involved in the private transaction can see the details of the transaction. As a result, the private state of each Quorum node may vary, and as such the private state is preserved in a separate Patricia-Merkle tree in each node. The validity of the private state of each node can be confirmed by running the storageRoot RPC API to reach private state consensus. This is, however, an additional feature since private consensus can be verified via public consensus as the hash of the encrypted private transaction is included in the public state hash. Any modification to private transactions would result in a different public state, placing the node out of consensus.

Quorum has implemented private transactions by modifying the geth sendTransaction command which is part of the standard go-ethereum implementation. They have added a “privateFor” field which receives the address of the bank or institution which will be receiving the funds and makes the entirety of the transaction private. The sendTransaction command also has one additional field for the hash of the encrypted private transaction data (only for transactions which are private). The remainder of the sendTransaction command remains the same.
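As a sketch, a Quorum private transaction payload looks like an ordinary Ethereum transaction plus the privateFor field (the addresses below are made-up placeholders, and per the Quorum documentation privateFor takes the base64 public keys of the parties allowed to see the payload):

```javascript
// Hypothetical private transaction payload for a Quorum node.
const privateTx = {
  from: '0x0000000000000000000000000000000000000001', // placeholder sender
  to:   '0x0000000000000000000000000000000000000002', // placeholder recipient
  data: '0x', // contract payload would go here
  // Quorum's addition: only nodes whose keys are listed can decrypt the payload.
  privateFor: ['ROAZBWtSacxXQrOe3FGAqJDyJjFePR5ce4TSIzmJ0Bc=']
};

// With a web3 instance pointed at a Quorum node, one would then send it:
// web3.eth.sendTransaction(privateTx, callback);
```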

If you want to dive into the details, which I won’t cover in this article, I highly encourage you to read the Quorum WhitePaper here.

Potential Next Steps

J.P. Morgan’s Quorum team has built out a robust framework and made the project open source so that other developers can utilize and iterate upon the tool. Signature Bank, a smaller New York bank, has already been using Quorum for a similar live payment platform called Signet. J.P. Morgan is open to others utilizing their platform and they will benefit from the increased testing and development as a result. Additionally, the J.P. Morgan blockchain team has developed “Cakeshop”, an integrated development environment (IDE) and software development kit (SDK) for “Ethereum-like ledgers” (i.e. Quorum).

As mentioned previously, Umar Farooq believes that this is just the starting point and that “The applications are frankly quite endless”. While JPM Coin and Quorum would need significant development to be utilized for payments on internet-connected devices, the sentiment is significant. It shows that the J.P. Morgan team is looking into the future and at potential applications for JPM Coin in markets which don’t yet exist.

The J.P. Morgan GitHub has a repo titled “zsl-q” which holds the POC for a “zero-knowledge security layer” (ZSL) implementation on Quorum. In the documentation, they admit that Quorum’s “key limitation is that it does not support prevention of double-spending for digital assets that are exchanged within private contracts”. This is a serious admission and one which the Quorum team must be keen on rectifying. ZSL is a protocol designed by the team that engineered Zcash, leveraging zk-SNARKS to “enable the transfer of digital assets on a distributed ledger, without revealing any information about the Sender, Recipient, or the quantity of assets that are being transferred.” The POC with Quorum will allow prevention of double-spending of digital assets exchanged privately. It is interesting to note that they worked with the Zcash team, emphasizing the wide breadth of experts invested in the Quorum project, including the Ethereum Enterprise Alliance.

Another interesting insight from J.P. Morgan’s GitHub is the development of Quorum for cloud services, with repos titled “quorum-aws” and “quorum-cloud”. Such cloud tools could enable Quorum to rapidly scale to more customers and entice a wide variety of institutions to join the JPM Coin ecosystem.

A Vote of Confidence

There has been a divide in the distributed technologies community between those who support cryptocurrencies and those who support blockchain or distributed ledger technologies (DLT). In the past year, those who support blockchain have felt that the efficacy of DLT has been shrouded by the rise and pop of the cryptocurrency market. And while the success of public blockchains depends on cryptocurrencies as a means to create a game-theory model of security, this isn’t the case for permissioned (private) blockchains.

Jamie Dimon, the CEO of J.P. Morgan Chase, likely falls in the latter camp, which supports blockchain technology and not crypto. He has been highly vocal about his distrust of cryptocurrencies but has asserted that blockchain technology holds promise, and his recent moves reflect this. Additionally, J.P. Morgan’s blockchain project, Quorum, is built on top of Ethereum, reusing the vast majority of Ethereum’s architecture in its design. This is an incredible vote of confidence in the Ethereum blockchain, both in its native public form and in private iterations.

Blockchain technology, cryptocurrencies, and digital assets are new to the world. Those yet to be initiated need time to test the waters, understand the technologies and the opportunities, and then adjust. Most often, progress occurs through iterations and not leaps and bounds. And no matter how frustrating the slow march of progress may be to some, attacking small incremental progress because we prefer grand leaps forward is a counterproductive measure.

The development and implementation of JPM Coin provide valuable, worldwide exposure for Ethereum and blockchain technologies. Furthermore, JPM Coin is of no detriment to Ethereum or other public blockchain initiatives. If you aren’t convinced by the content above, reread this passage from the Ethereum Yellow Paper:

There are many goals of this project; one key goal is to facilitate transactions between consenting individuals who would otherwise have no means to trust one another. This may be due to geographical separation, interfacing difficulty, or perhaps the incompatibility, incompetence, unwillingness, expense, uncertainty, inconvenience or corruption of existing legal systems. By specifying a state-change system through a rich and unambiguous language, and furthermore architecting a system such that we can reasonably expect that an agreement will be thus enforced autonomously, we can provide a means to this end.

A lot of work is still to be done to achieve the above goal, and JPM Coin will prove to be a valuable stepping stone en route to a new world built upon public distributed ledger technologies.

Disclaimer: This article is not intended to provide investment advice. Any investment should be made cautiously and only after the associated risks are understood.

Have thoughts, opinions, or are simply excited about the distributed future??? Leave your comments below!

Breaking Down JPM Coin was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.


12 JavaScript Concepts That Will Level Up Your Development Skills


JavaScript is a complex language. If you’re a JavaScript developer at any level, it’s important to understand its foundational concepts. This article tackles 12 concepts that are critical for any JS developer to understand, but in no way represents the full breadth of what a JS developer should know.

Note: If you enjoy this article, please give it a clap 👏 to help spread the word!

I will be continuously updating this list in a GitHub repository called JS Tips & Tidbits. Please star ⭐ and share if you want to follow along!

1. Value vs. Reference Variable Assignment

Understanding how JavaScript assigns to variables is foundational to writing bug-free JavaScript. If you don’t understand this, you could easily write code that unintentionally changes values.

JavaScript always assigns variables by value. But this part is very important: when the assigned value is a primitive type (e.g., Boolean, null, undefined, String, or Number), the actual value is assigned. However, when the assigned value is an Array, Function, or Object, a reference to the object in memory is assigned.

Example time! In the following snippet, var2 is set equal to var1. Since var1 holds a primitive type (String), var2 is set equal to var1’s String value and can be thought of as completely distinct from var1 at this point. Accordingly, reassigning var2 has no effect on var1.

let var1 = 'My string';
let var2 = var1;
var2 = 'My new string';

console.log(var1);
// 'My string'
console.log(var2);
// 'My new string'

Let’s compare this with object assignment.

let var1 = { name: 'Jim' };
let var2 = var1;
var2.name = 'John';

console.log(var1);
// { name: 'John' }
console.log(var2);
// { name: 'John' }

One might see how this could cause problems if you expected behavior like primitive assignment! This can get especially ugly if you create a function that unintentionally mutates an object.
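To illustrate the pitfall (setName and setNameSafely are hypothetical names), a function that assigns to a property of its argument mutates the caller’s object, while copying first avoids the problem:

```javascript
// Mutates the object the caller passed in.
function setName(person) {
  person.name = 'Anonymous';
  return person;
}

// Copies first (spread makes a shallow copy), leaving the caller's object intact.
function setNameSafely(person) {
  return { ...person, name: 'Anonymous' };
}

const jim = { name: 'Jim' };
setName(jim);
console.log(jim.name); // 'Anonymous' (surprise!)

const pam = { name: 'Pam' };
const anon = setNameSafely(pam);
console.log(pam.name);  // 'Pam' (untouched)
console.log(anon.name); // 'Anonymous'
```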

2. Closures

Closure is an important JavaScript pattern to give private access to a variable. In this example, createGreeter returns an anonymous function that has access to the supplied greeting, “Hello.” For all future uses, sayHello will have access to this greeting!

function createGreeter(greeting) {
  return function(name) {
    console.log(greeting + ', ' + name);
  };
}

const sayHello = createGreeter('Hello');
sayHello('Joe');
// Hello, Joe

In a more real-world scenario, you could envision an initial function apiConnect(apiKey) that returns some methods that would use the API key. In this case, the apiKey would just need to be provided once and never again.

function apiConnect(apiKey) {
  function get(route) {
    return fetch(`${route}?key=${apiKey}`);
  }

  function post(route, params) {
    return fetch(route, {
      method: 'POST',
      body: JSON.stringify(params),
      headers: {
        'Authorization': `Bearer ${apiKey}`
      }
    });
  }

  return { get, post };
}

const api = apiConnect('my-secret-key');

// No need to include the apiKey anymore
api.post('http://www.example.com/post-endpoint', { name: 'Joe' });

3. Destructuring

Don’t be thrown off by JavaScript parameter destructuring! It’s a common way to cleanly extract properties from objects.

const obj = {
  name: 'Joe',
  food: 'cake'
};

const { name, food } = obj;
console.log(name, food);
// 'Joe' 'cake'

If you want to extract properties under a different name, you can specify them using the following format.

const obj = {
  name: 'Joe',
  food: 'cake'
};

const { name: myName, food: myFood } = obj;
console.log(myName, myFood);
// 'Joe' 'cake'

In the following example, destructuring is used to cleanly pass the person object to the introduce function. In other words, destructuring can be (and often is) used directly for extracting parameters passed to a function. If you’re familiar with React, you probably have seen this before!

const person = {
  name: 'Eddie',
  age: 24
};

function introduce({ name, age }) {
  console.log(`I'm ${name} and I'm ${age} years old!`);
}

introduce(person);
// "I'm Eddie and I'm 24 years old!"

4. Spread Syntax

A JavaScript concept that can throw people off but is relatively simple is the spread operator! In the following case, Math.max can’t be applied to the arr array because it doesn’t take an array as an argument; it takes the individual elements as arguments. The spread operator … is used to pull the individual elements out of the array.

const arr = [4, 6, -1, 3, 10, 4];
const max = Math.max(...arr);
// 10
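Spread isn’t limited to function calls; it also gives concise shallow copies and concatenations of arrays and objects:

```javascript
const nums = [1, 2, 3];
const more = [...nums, 4, 5]; // [1, 2, 3, 4, 5] (nums is untouched)

const defaults = { role: 'user' };
const admin = { ...defaults, role: 'admin' }; // later properties win: { role: 'admin' }
```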

5. Rest Syntax

Let’s talk about JavaScript rest syntax. You can use it to put any number of arguments passed to a function into an array!

function myFunc(...args) {
  console.log(args[0] + args[1]);
}

myFunc(1, 2, 3, 4);
// 3
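Since the rest parameter is a real array, it works with every array method; a sum helper (a hypothetical name of mine) shows this well:

```javascript
function sum(...nums) {
  // nums is a genuine Array, so reduce applies directly.
  return nums.reduce((total, n) => total + n, 0);
}

const total = sum(1, 2, 3, 4); // 10
```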

6. Array Methods

JavaScript array methods can often provide you incredible, elegant ways to perform the data transformation you need. As a contributor to StackOverflow, I frequently see questions regarding how to manipulate an array of objects in one way or another. This tends to be the perfect use case for array methods.

I will cover a number of different array methods here, organized by similar methods that sometimes get conflated. This list is in no way comprehensive: I encourage you to review and practice all of them discussed on MDN (my favorite JavaScript reference).

map, filter, reduce
There is some confusion around the JavaScript array methods map, filter, and reduce. These are helpful methods for transforming an array or returning an aggregate value.

  • map: return array where each element is transformed as specified by the function
const arr = [1, 2, 3, 4, 5, 6];
const mapped = arr.map(el => el + 20);
// [21, 22, 23, 24, 25, 26]
  • filter: return array of elements where the function returns true
const arr = [1, 2, 3, 4, 5, 6];
const filtered = arr.filter(el => el === 2 || el === 4);
// [2, 4]
  • reduce: accumulate values as specified in function
const arr = [1, 2, 3, 4, 5, 6];
const reduced = arr.reduce((total, current) => total + current);
// 21
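Because map and filter each return a new array, these methods chain cleanly; a small combined example (the variable names are mine):

```javascript
// Sum of the even elements, each multiplied by 10.
const nums = [1, 2, 3, 4, 5, 6];
const result = nums
  .filter(el => el % 2 === 0)            // [2, 4, 6]
  .map(el => el * 10)                    // [20, 40, 60]
  .reduce((total, el) => total + el, 0); // 120
```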

find, findIndex, indexOf
The array methods find, findIndex, and indexOf can often be conflated. Use them as follows.

  • find: return the first instance that matches the specified criteria. Does not progress to find any other matching instances.
const arr = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
const found = arr.find(el => el > 5);
// 6

Again, note that while everything after 5 meets the criteria, only the first matching element is returned. This is actually super helpful in situations where you would normally break a for loop when you find a match!

  • findIndex: This works almost identically to find, but rather than returning the first matching element it returns the index of the first matching element. Take the following example, which uses names instead of numbers for clarity.
const arr = ['Nick', 'Frank', 'Joe', 'Frank'];
const foundIndex = arr.findIndex(el => el === 'Frank');
// 1
  • indexOf: Works almost identically to findIndex, but instead of taking a function as an argument it takes a simple value. You can use this when you have simpler logic and don’t need to use a function to check whether there is a match.
const arr = ['Nick', 'Frank', 'Joe', 'Frank'];
const foundIndex = arr.indexOf('Frank');
// 1

push, pop, shift, unshift
There are a lot of great array methods to help add or remove elements from arrays in a targeted fashion.

  • push: This is a relatively simple method that adds an item to the end of an array. It modifies the array in place, and the function itself returns the new length of the array.
let arr = [1, 2, 3, 4];
const pushed = arr.push(5);
// [1, 2, 3, 4, 5]
// 5
  • pop: This removes the last item from an array. Again, it modifies the array in place. The function itself returns the item removed from the array.
let arr = [1, 2, 3, 4];
const popped = arr.pop();
// [1, 2, 3]
// 4
  • shift: This removes the first item from an array. Again, it modifies the array in place. The function itself returns the item removed from the array.
let arr = [1, 2, 3, 4];
const shifted = arr.shift();
// [2, 3, 4]
// 1
  • unshift: This adds one or more elements to the beginning of an array. Again, it modifies the array in place. Like push, the function itself returns the new length of the array.
let arr = [1, 2, 3, 4];
const unshifted = arr.unshift(5, 6, 7);
// [5, 6, 7, 1, 2, 3, 4]
// 7

splice, slice
These methods either modify or return subsets of arrays.

  • splice: Change the contents of an array by removing or replacing existing elements and/or adding new elements. This method modifies the array in place.

The following code sample can be read as: at position 1 of the array, remove 0 elements and insert b.

let arr = ['a', 'c', 'd', 'e'];
arr.splice(1, 0, 'b');
// ['a', 'b', 'c', 'd', 'e']
  • slice: returns a shallow copy of an array from a specified start position and before a specified end position. If no end position is specified, the rest of the array is returned. Importantly, this method does not modify the array in place but rather returns the desired subset.
let arr = ['a', 'b', 'c', 'd', 'e'];
const sliced = arr.slice(2, 4);
// ['c', 'd']
// ['a', 'b', 'c', 'd', 'e']


sort
  • sort: sorts an array based on the provided function, which takes a first element and second element as arguments. Modifies the array in place. If the function returns negative or 0, the order remains unchanged. If positive, the element order is switched.
let arr = [1, 7, 3, -1, 5, 7, 2];
const sorter = (firstEl, secondEl) => firstEl - secondEl;
arr.sort(sorter);
// [-1, 1, 2, 3, 5, 7, 7]

Phew, did you catch all of that? Neither did I. In fact, I had to reference the MDN docs a lot while writing this, and that’s okay! Just knowing what kinds of methods are out there will get you 95% of the way there.

7. Generators

Don’t fear the *. A generator function specifies what value is yielded the next time next() is called. It can either yield a finite number of values, after which next() returns undefined, or an infinite number of values using a loop.

function* greeter() {
  yield 'Hi';
  yield 'How are you?';
  yield 'Bye';
}

const greet = greeter();

console.log(greet.next().value);
// 'Hi'
console.log(greet.next().value);
// 'How are you?'
console.log(greet.next().value);
// 'Bye'
console.log(greet.next().value);
// undefined

And using a generator for infinite values:

function* idCreator() {
  let i = 0;
  while (true)
    yield i++;
}

const ids = idCreator();

console.log(ids.next().value);
// 0
console.log(ids.next().value);
// 1
console.log(ids.next().value);
// 2
// etc...

8. Identity Operator (===) vs. Equality Operator (==)

Be sure to know the difference between the identity operator (===) and the equality operator (==) in JavaScript! The == operator will do type conversion prior to comparing values, whereas the === operator will not do any type conversion before comparing.

console.log(0 == '0');
// true
console.log(0 === '0');
// false
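A few more cases worth memorizing, since the coercion rules can be surprising:

```javascript
console.log(null == undefined);  // true: null and undefined equal only each other
console.log(null === undefined); // false: different types
console.log('' == 0);            // true: '' coerces to the number 0
console.log('' === 0);           // false: no coercion, string vs number
```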

9. Object Comparison

A mistake I see JavaScript newcomers make is directly comparing objects. Variables point to references to the objects in memory, not the objects themselves! One method to actually compare them is converting the objects to JSON strings. This has a drawback, though: object property order is not guaranteed! A safer way to compare objects is to pull in a library that specializes in deep object comparison (e.g., lodash’s isEqual).

The following objects appear equal but they are in fact pointing to different references.

const joe1 = { name: 'Joe' };
const joe2 = { name: 'Joe' };
console.log(joe1 === joe2);
// false

Conversely, the following evaluates as true because one object is set equal to the other object and are therefore pointing to the same reference (there is only one object in memory).

const joe1 = { name: 'Joe' };
const joe2 = joe1;
console.log(joe1 === joe2);
// true

Make sure to review the Value vs. Reference section above to fully understand the ramifications of setting a variable equal to another variable that’s pointing to a reference to an object in memory!
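To see the property-order drawback of the JSON-string trick mentioned above:

```javascript
const a = { name: 'Joe', age: 30 };
const b = { age: 30, name: 'Joe' }; // same content, different key order

// Equal in content, but the serialized strings differ:
console.log(JSON.stringify(a)); // '{"name":"Joe","age":30}'
console.log(JSON.stringify(b)); // '{"age":30,"name":"Joe"}'
console.log(JSON.stringify(a) === JSON.stringify(b)); // false
```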

10. Callback Functions

Far too many people are intimidated by JavaScript callback functions! They are simple; take this example. The console.log function is being passed as a callback to myFunc and gets executed when setTimeout completes. That’s all there is to it!

function myFunc(text, callback) {
  setTimeout(function() {
    callback(text);
  }, 2000);
}

myFunc('Hello world!', console.log);
// 'Hello world!'

11. Promises

Once you understand JavaScript callbacks, you’ll soon find yourself in nested “callback hell.” This is where Promises help! Wrap your async logic in a Promise and resolve on success or reject on failure. Use .then() to handle success and .catch() to handle failure.

const myPromise = new Promise(function(res, rej) {
  setTimeout(function() {
    if (Math.random() < 0.9) {
      return res('Hooray!');
    }
    return rej('Oh no!');
  }, 1000);
});

myPromise
  .then(function(data) {
    console.log('Success: ' + data);
  })
  .catch(function(err) {
    console.log('Error: ' + err);
  });

// If Math.random() returns less than 0.9 the following is logged:
// "Success: Hooray!"
// If Math.random() returns 0.9 or greater the following is logged:
// "Error: Oh no!"

12. Async Await

Once you get the hang of JavaScript promises, you might like async await, which is just “syntactic sugar” on top of promises. In the following example we create an async function and within that we await the greeter promise.

const greeter = new Promise((res, rej) => {
  setTimeout(() => res('Hello world!'), 2000);
});

async function myFunc() {
  const greeting = await greeter;
  console.log(greeting); // 'Hello world!'
}

myFunc();
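One nicety of async/await: awaiting a rejected promise throws inside the async function, so an ordinary try/catch replaces .catch(). A small sketch (failer and myFuncSafe are my own names):

```javascript
const failer = new Promise((res, rej) => {
  setTimeout(() => rej('Oh no!'), 100);
});

async function myFuncSafe() {
  try {
    await failer;
  } catch (err) {
    console.log('Error: ' + err); // 'Error: Oh no!'
    return err;
  }
}

myFuncSafe();
```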


If you didn’t know any of these 12 concepts, you likely have grown at least a little in your knowledge of JavaScript! And if you knew them all, then hopefully this was a chance to practice and grow your knowledge. What other concepts do you think are critical? Let me know in the comments.

12 JavaScript Concepts That Will Level Up Your Development Skills was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

Startup Idea: Make People More Ambitious

You ask, Why? I say, Why not? It’s the highest return activity in the world.

A few months ago, Tyler Cowen wrote a blog post titled “The high-return activity of raising others’ aspirations”.

An excerpt from that:

“At critical moments in time, you can raise the aspirations of other people significantly, especially when they are relatively young, simply by suggesting they do something better or more ambitious than what they might have in mind. It costs you relatively little to do this, but the benefit to them, and to the broader world, may be enormous.

This is in fact one of the most valuable things you can do with your time and with your life.”

I have come to think of this thing as a business model!

Look at how Lambda School (a school where you pay nothing until you’re earning $50k or more, and if you don’t, it’s free), founded by Austen Allred, does it: it enables its students to think big and helps them increase their salaries by a huge margin.

Then: making minimum wage. Now: highly paid software engineer. No experience, no degree, no upfront cost, just sheer hard work and a better life. Congratulations, Andrew!

 — @Austen

Just look at these stories: they make you believe you can build a good business out of this!

The second startup that does this amazingly well is Pioneer.

Emails like these make work flat-out fun:

 — @danielgross

I am not sure what Pioneer’s business model is, but whatever it is, I fundamentally believe that a startup like Pioneer, founded by Daniel Gross, deserves to exist: it ensures a fair playing field for talented individuals all over the world.

Similarly, what else is it that you can do that helps people aim higher in life?

Let me know.

Honestly put, this is something I want to do at some point in my life. (And I ask myself, why not now?)

Also, Subscribe to my newsletter here.

Ankit Kumar Newsletter

PS: Follow me on Twitter

Startup Idea: Make People More Ambitious was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

Bitcoin: The Living Room Revolution

Bitcoin and crypto are a massive crowd-sourced project to increase individual sovereignty over money and information, which, if successful, will transform the world we live in, forever.

“The tree of liberty must be refreshed from time to time with the electricity and computing power of patriots and revolutionaries.” — Thomas Jefferson (if he were alive today)

It’s about Liberty
Most of the world’s money is fiat money. Fiat is a dirty word. Its usage is rooted in phrases like “by Royal fiat” as in, by Royal command.

While we may think that we tossed off the shackles of ‘rule by fiat’ from monarchies and dictatorships, we still comfortably refer to our money as “fiat”. It is, after all, printed on-demand by a handful of people whenever they wish. This point shouldn’t be taken lightly because the power of monetary policy is the power to destroy the world. Time and time again societies have been brought to their knees because of fiat money. We are all still living under rule by fiat.

10 years ago, Bitcoin launched a practical and non-violent revolution that began in living rooms with the goal of liberty from rule by fiat. Bitcoin forms, as the US Constitution says, “a more perfect union” because it eliminates a closed middleman (rulers) that can arbitrarily control our world and replaces it with math and computation that is open. It replaces a system that can be abused at any time by those in power with a system whose rules change slowly or never, and which is open to everyone to participate in its existence.

But for this to work, Bitcoin demands that we work together. Massive distributed computing is needed to power and validate transactions and secure the Bitcoin network. By definition, this computing power needs to be decentralized and distributed, to ensure that no one party (or cartel) has enough of the computing power to take over.

We call this aspect of Bitcoin and crypto — its true purpose — “Liberty Crypto”. Liberty is not freedom from each other, Liberty is freedom from a central authority. In fact, Liberty thrives when we work directly with each other and that’s how Bitcoin and crypto thrive. And by working directly together we can be free of fiat rule. That is the Living Room Revolution.

Lightning & the Living Room Revolution
Part of what makes Bitcoin so different than the existing financial system is that it is powered by a permission-less network of decentralized actors who voluntarily contribute to make the network work. At the same time, however, for most of the last decade, the tools to participate in providing Bitcoin and other crypto networks with security and liquidity were limited to those who could afford and manage complicated hardware and software setup.

But, in 2019, we’re witnessing the emergence of a new generation of plug-and-play products that dramatically reduce those barriers to entry. For our part, we’ve strived to build Coinmine so it’s easy enough for anyone to help power a revolution of money and information right from their living room. Our mission is to make gaining liberty over our information and money as easy as plugging in a sleek device and tapping an app.

The early internet protocols HTTP, SMTP, FTP weren’t easy to use at first. They were run from command lines by devs only. Today, billions of us use them by tapping in apps. Crypto protocols are going through the same evolution. Soon, crypto and blockchain will be a tap away.

 — @farbood

One of the best expressions of this new era of liberty is the rise of the Lightning Network.

Lightning is a layer 2 technology built on top of Bitcoin, designed to enable people to use their BTC to make…well, lightning-fast p2p transactions with almost zero transaction costs. It holds the real promise of a digital world where a $5 coffee doesn’t come with a 10% credit card processing fee.

Lightning, again, needs us to work together. It is powered by a distributed network of payment channels. But to reach its goal, Lightning is going to need a lot of channels. The Lightning Network facilitates Bitcoin payments by routing them through those networks of payment channels. In other words, it replaces older, centralized payment rails with a network of people running nodes and maintaining channels. It is people — individuals like you and me — who relay transactions and provide liquidity to the network. The more nodes & channels there are, the more liquidity in the network, the cheaper the transactions, but most importantly, the greater the liberty as the network becomes harder and harder to control by one party. We are building a new financial and monetary system, together. That is Liberty.

At the beginning of 2018, there were only a handful of nodes, running a small number of channels. Today, just over a year later, there are some 6000+ nodes, running over 27,000 channels with roughly 698 BTC in capacity (in fiat, that’s a few million USD). This is happening despite teams behind some of the important implementations of Lightning, like Lightning Labs, cautioning that the technology is still very nascent and people shouldn’t put very much in their channels.

The Lightning Network is the Living Room Revolution playing out before our eyes, and from our homes.

The ROI Of The Living Room Revolution
Much of this piece is about the ethos of powering crypto from home, and why participating in activities like running a Lightning node is not just about financial return, but about having a hand in ushering in a new world of sovereign liberty over money and information.

Part of the genius of Bitcoin and other crypto economic systems, however, is that their economic incentives are designed such that a party doesn’t have to even care about this ethos to be rewarded for participating in the network.

In short, if an asset like Bitcoin, with its supply capped at 21 million, reaches escape velocity and becomes adopted as digital gold or an even more robust digital money system, early miners, holders and Lightning routers, will be well rewarded. Indeed, many would argue that even the last person to adopt Bitcoin will be rewarded, relative to the fiat system which diminishes the value of their currency as it inflates. For everyone who adopts before that, however, the benefit is even greater.

The practical effect of this system design is that the advocates of the Living Room Revolution — those believers motivated by more than money who power crypto by doing things like running Lightning nodes and mining — have as an ally the power of economic self-interest. The revolutionaries are required to make sure that the network stays decentralized and doesn’t re-consolidate around a small group of entities, capital and hash power. The massive financial opportunity inherent in Bitcoin and other crypto assets creates a positive economic feedback loop that can help those assets reach escape velocity.

The first wonder of the digital world.
We believe that this Living Room Revolution, if successful, will be the first wonder of a new world.

But success is far from guaranteed. Today, far too many of the conversations about mining, staking and running nodes revolve around short-term economic ROI. While this is not financial advice, our modeling shows that the economic ROI of being part of the Living Room Revolution is more likely to materialize over the long term than the short term.

Despite the potential financial rewards, the decision to power crypto from home is much more than financial. Fundamentally, it is about building a new world that is outside the power of domineering institutions and which instead enshrines the liberty of sovereign individuals voluntarily and collectively working together for something better.

It is Liberty over fiat.

It is the Living Room Revolution.

Bitcoin: The Living Room Revolution was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

Security in your OpenAPI Specification

Learn how OpenAPI 2.0 and 3.0 can promote and emphasize API security as a first-class concern during the design phase of API development. It is no longer a mystery: APIs are eating the world. There are a lot of companies today that are offering their API…