Imagine that you have just designed a water faucet/tap that dispenses water only if your fingerprint matches (however strange this may sound, I believe creativity should know no bounds ;). Now, for this to be actually used in a real wash-room or a kitchen sink, it needs to have a receptacle that exactly fits the common water pipe end-point. Let's say it needs a 1/2-inch female-threaded end to couple neatly onto the male-threaded pipe end. In other words, this new contraption of yours needs to follow the same plumbing interface definition as the pipes it needs to work with.
…To Application Programming Interface
Now, imagine if you were more of a software dude/dudette than a pipe-hardware one. You might design a piece of software that allows money from your bank to flow out only if your fingerprint matches (this sounds a lot more familiar, right? :). For this widgety creation of yours to be actually used in a real banking app or website, it needs to be able to be 'integrated' into the money-flow interface definition of the bank(s). As a developer of this new functionality, you therefore need to follow what is called the Application Programming Interface (API) definition exposed by the banking entity that you intend your software to work with. There you are! You have hereby been introduced to APIs. Simple, right?
The advantage of defining an API is this: once defined and published, it opens your platform up to a limitless set of different applications by independent innovators. So you might see some developer making an app that sends money from a bank account every time someone likes her profile pic on Facebook, or another who develops an app that makes a donation to a random NGO every time you use a swear-word on Twitter… I hope you get the drift. There is no limit to the variety of innovation this can spawn. And all this while you enjoy a nice cold iced lemon tea reading some news-feed on your tab.
In short, APIs decouple application use cases, innovation, revenue generation and growth from your core platform. The better defined your APIs and the partner on-boarding processes are, the more you can relax and count the beans 🙂
An API is also important because it has to make as much sense to the humans implementing it as it does to the machines consuming or exposing it. This is an act of fine balance.
An API once said, “I need some REST”
Let's now focus on something a bit more technical: RESTful APIs. For a start, RESTful does not refer to the relaxation that I'd mentioned a short while ago. The REST in RESTful refers to REpresentational State Transfer.
A lot of us have come from the functional programming world. In simple terms, the interfaces defined there would correspond to the verbs being exposed. For instance, in a banking app, you could have an interface defined as getBalance(account), which would return the account balance for a given account. Or sendMoney(account1, account2). Or listAccounts(customer). These functions could also be exposed as APIs, and they do something based on their input parameters. The response of such an API call would be the result of the action named by the verb, with output parameters providing more details on the action done.
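To make this concrete, here's a tiny Python sketch of such a verb-style interface. The account names and balances are, of course, made up, and a real bank would do far more than poke a dictionary:

```python
# A hypothetical verb-style banking interface: each function name is an
# arbitrary verb, so a caller must read the docs to discover them all.
accounts = {"A1": 500, "A2": 250}  # toy in-memory ledger

def get_balance(account):
    """Return the balance for the given account."""
    return accounts[account]

def send_money(from_account, to_account, amount):
    """Move `amount` from one account to another."""
    accounts[from_account] -= amount
    accounts[to_account] += amount

send_money("A1", "A2", 100)
print(get_balance("A1"))  # 400
print(get_balance("A2"))  # 350
```

Notice that nothing about the names getBalance or sendMoney is predictable; that unpredictability is exactly the problem discussed next.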
This kind of API has one inherent problem: the descriptions can be really arbitrary, and it would be difficult to discover all the functionality exposed without explicit access to the API creator's documentation.
RESTful APIs are a different breed altogether. They focus a lot more on the resources or nouns, instead of verbs. For example, ‘customer’ could be a resource, ‘account’ could be another and so on. As for the actual action that you would want to take with a resource, REST simplifies it all down to a set of pre-defined verbs in the WWW HTTP definition.
When you visit google.com, for example, the browser actually executes a GET request for the index page based on the URL (Uniform Resource Locator) http://www.google.com.
The idea (REST lends its origin to the doctoral thesis of a genius named Roy Fielding) is that, given these constant verbs, all that a developer needs to know is the set of objects/nouns that they might have to deal with.
For instance, assume a simplified banking application. It may have a resource called customer or transaction or account. So, in the simplest sense, assume the base URL is https://myxyzbank. Now the base URL is like the root directory for all resources (nouns). So, accessing a customer within the bank would likely be baseURL/customer.
There are a few more properties of RESTful APIs. I would only want to touch upon the fact that these API calls are also stateless. That is, a RESTful API call is in itself complete and independent of previous or future calls. In other words, the API calling entity’s state is not preserved on the server in between calls.
Singular and plural
Also noteworthy is the singular and plural use of these nouns.
While /customer/id refers to a particular customer with a given ‘id’ as its identity;
/customers/ refers to all the customers collectively.
So, executing an HTTP GET request on https://myxyzbank/customers should return a list of all customers within the bank, while executing a GET request on https://myxyzbank/customer/id would return a particular customer only. So, plain English "get a list of all customers in this bank" translates to an API call: GET on https://myxyzbank/customers/, and the bank should spew out its long list. (Of course it ain't that simple. Security, roles, access and privileges have been excluded from the scope of this article.)
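Here's a toy Python sketch of that collection-vs-single-resource idea, with an in-memory "bank" standing in for a real server. The customer records and the routing rules are purely illustrative:

```python
# Toy in-memory "server": a plural path returns the collection,
# a singular path returns one resource by id.
customers = {
    "c1": {"id": "c1", "name": "Asha"},
    "c2": {"id": "c2", "name": "Ravi"},
}

def handle_get(path):
    if path == "/customers":            # plural: the whole collection
        return list(customers.values())
    if path.startswith("/customer/"):   # singular: one resource by id
        return customers[path.split("/")[-1]]

print(len(handle_get("/customers")))       # 2
print(handle_get("/customer/c1")["name"])  # Asha
```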
Now back to the 4 main primitives and why they are pretty much sufficient for most applications.
Get, Post, Put and Delete is all you need
GET - as the name suggests, simply fetches the resource(s) identified
POST - use it when you intend to create a new resource on the server
PUT - updates an existing resource on the server
DELETE - deletes the identified resource(s)
This sounds a lot like the CRUD framework (Create, Read, Update, Delete) used for data storage.
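A minimal sketch of that verb-to-CRUD mapping, using a plain Python dictionary as a stand-in for server-side storage. The dispatcher and resource names are illustrative, not any real framework's API:

```python
# Map the four HTTP verbs onto CRUD over an in-memory store.
store = {}

def dispatch(method, resource_id, body=None):
    if method == "POST":      # Create
        store[resource_id] = body
    elif method == "GET":     # Read
        return store.get(resource_id)
    elif method == "PUT":     # Update
        store[resource_id] = body
    elif method == "DELETE":  # Delete
        store.pop(resource_id, None)

dispatch("POST", "acc1", {"balance": 100})
dispatch("PUT", "acc1", {"balance": 150})
print(dispatch("GET", "acc1"))  # {'balance': 150}
dispatch("DELETE", "acc1")
print(dispatch("GET", "acc1"))  # None
```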
Additional parameters could also be passed to and from the server along with each resource request. This data could be in different formats; the most popular and elegant one around is called JSON (JavaScript Object Notation). Another, excessively verbose and elaborate format from our good ol' days is XML (Extensible Markup Language).
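For instance, here's a toy customer record serialized as JSON with Python's standard library, with its chattier XML equivalent shown in a comment for comparison (the record itself is made up):

```python
import json

# The same toy customer record as JSON; the equivalent XML (in the
# comment below) carries the same data with noticeably more markup.
customer = {"id": "c1", "name": "Asha", "balance": 400}
payload = json.dumps(customer)
print(payload)  # {"id": "c1", "name": "Asha", "balance": 400}

# XML equivalent, for comparison:
# <customer><id>c1</id><name>Asha</name><balance>400</balance></customer>

roundtrip = json.loads(payload)
print(roundtrip["name"])  # Asha
```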
Beauty and the beast
There is a certain sense of beauty, logic, crispness and cleanness in the structure of a truly RESTful API. The sad part is that most implement it with varying degrees of RESTful-ness. To the extent that I have even seen a bank call GET /getCustomerBalance a RESTful API! That's a verb on verb action and totally loses the plot.
My attempt was only to provide a plain and simple outline of APIs and RESTful APIs. There is a lot more to it, and the world wide web should be your best guide.
May the POST be with you! And May is hot!
RESTful Web APIs https://www.amazon.in/dp/9351102971/ref=cm_sw_r_cp_apa_i_2A0fzbJN3CK5T
Tinkerbees come with their own set of APIs that help you integrate them with your app/ business logic seamlessly and in a snap.
Once upon a time there was the simple Internet, and it was designed for people to exchange content with each other. Since it was implied that this Internet of People (IoP) was obviously for people, it was simply known as the Internet.
Now, the way people communicate is way different from the way things do. We, the people, for instance, love to do video chats, post our selfies, watch hours of cat memes and endless streams of text messages interspersed with smileys and weird acronyms. You would be surprised to know how much of this stuff moves around in a single internet minute! (Well, mostly not-so-useful stuff.) This is akin to time itself being compressed and decompressed at a colossal scale.
But Things communicate pretty crisply and in a business-like fashion. A smart thermometer may simply wake up, say "33°C" and go to sleep! They (usually) have no business chatting about the cute cat next door and so they simply don't. A smart bulb may only need to hear "1" to turn itself on, or "0" to turn itself off. The attempt here is not to outline exactly what language they communicate in or its semantics or its layers, but to generally give you an idea that their messages are usually infrequent, crisp and involve no crap.
There are of course exceptions. A smart video camera may need to transmit a live video feed, which is neither infrequent nor crisp; but most other sensors and actuators may very well tolerate low bandwidth and high latency.
If you had read the first link in this article, you would realize that in many applications, it is necessary to have battery powered devices that last for a very long time (many years!). Also, in many applications, it is desirable to have battery powered devices that transmit/ receive wirelessly across long distances (many miles!).
Enter LP-WAN, which simply stands for Low Power, Wide Area Network. This does exactly what it says. It is a generic term for technology blocks that allow low-powered end-nodes/devices to communicate with each other or with a set of central servers, over a large geographic area (aka Wide Area Network or WAN).
LP-WAN stacks are usually narrow band, support low bit-rates and may restrict throughput.
We will now take a few small detours to understand a few concepts better. You could skip them if you so wish.
a. What is narrow-band?
How does bandwidth figure into all this? Let's explore this aspect a bit. Any electromagnetic signal can essentially be approximated/mathematically represented as an infinite set of sine waves. These waves will have different frequencies (f1, f2, f3… fn) and different amplitudes. There would be a few dominant waves (say fd1, fd2, fd3… fdn) and a few non-dominant ones (say fnd1, fnd2, fnd3… fndn). If that signal is being transmitted from one end of a medium, then for it to be reproduced with reasonable fidelity at the other end, we would need most of the dominant and as many of the non-dominant frequency waves as possible to reach the other end. The bandwidth available for the medium is the difference between the minimum and maximum frequencies (fmax - fmin) that can be propagated through it without being significantly attenuated (dying out). The bandwidth also depends on the frequency band, because different media have different conduction properties for different frequency ranges.
A narrow band therefore implies a medium (or an artificial constraint placed on the medium) where the difference between the highest and lowest frequency of signals that can be propagated through it is very small, say a few hundred kilohertz.
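To see these ideas in code, here's a small Python sketch that builds a signal from two dominant sine waves and then recovers their frequencies with a naive DFT; the spread between the highest and lowest recovered frequencies is the band the signal occupies. The sample rate and tones are arbitrary illustrative values:

```python
import math

# Build a signal from two sine components, then find them again.
fs = 500                          # samples per second (illustrative)
tones = [(50, 1.0), (120, 0.5)]   # (frequency Hz, amplitude): dominant waves
signal = [sum(a * math.sin(2 * math.pi * f * t / fs) for f, a in tones)
          for t in range(fs)]     # one second of samples

def magnitude(freq):
    """Correlate the signal with a sine/cosine pair at `freq` (one DFT bin)."""
    re = sum(s * math.cos(2 * math.pi * freq * t / fs)
             for t, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * t / fs)
             for t, s in enumerate(signal))
    return 2 * math.hypot(re, im) / fs

dominant = [f for f in range(1, fs // 2) if magnitude(f) > 0.1]
print(dominant)                       # [50, 120]
print(max(dominant) - min(dominant))  # occupied spread: 70 Hz
```

A narrow-band system, in these terms, is one where that spread is forced to stay tiny.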
b. What is modulation?
In simple terms, it is the method of wrapping one signal up within a different signal envelope in such a way that it can fulfill a signal condition more suitable for the medium being used, and such that it is possible to deconstruct this at the receiving end and retrieve the original signal (demodulation).
One such modulation technique is amplitude modulation, where a lower frequency signal is essentially used as a cookie cutter on a higher frequency (carrier) wave:

amplitude(signalOut) = fn(amplitude(signalIn))
frequency(signalOut) = Kf, where Kf >> frequency(signalIn)
Another modulation technique is frequency modulation, where the amplitude of the lower frequency signal is converted into a corresponding signal frequency in a higher frequency range:

frequency(signalOut) = fn(amplitude(signalIn))
amplitude(signalOut) = Ka, a constant
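Here's a rough Python sketch of the amplitude modulation case: a slow message wave shaping the amplitude of a much faster carrier. All frequencies and the 0.5 modulation depth are arbitrary illustrative values, not any real transmitter's parameters:

```python
import math

# Amplitude modulation sketch: the envelope of the carrier follows
# the low-frequency message signal.
fs = 100_000        # sample rate, Hz
f_msg = 100         # message frequency (signalIn)
f_carrier = 10_000  # carrier frequency: Kf >> f_msg

def am_sample(t):
    message = math.sin(2 * math.pi * f_msg * t / fs)
    envelope = 1 + 0.5 * message  # amplitude follows the message
    return envelope * math.sin(2 * math.pi * f_carrier * t / fs)

am = [am_sample(t) for t in range(fs // 10)]  # 100 ms of signal
# Peaks approach 1.5 where the message peaks, dip toward 0.5 at its troughs.
print(round(max(am), 2))
```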
Modulation therefore helps one ensure that the signals generated by our sensors could be packed into the band and bandwidth constraints imposed by the air interface regulations.
c. Who owns the air interface?
The electromagnetic spectrum (or rather the radio and microwave part of it) is considered a sovereign subject. That is, something owned/regulated very strictly by the state. This is because the available bandwidth is limited and there are multiple applications and entities vying for a favorable band.
Simply put, two signals within the same bandwidth will have elements (constituent signal waves at particular frequencies) that will interfere (add up or cancel) with each other. Such an interference will result in a significant attenuation or amplification of constituent signals resulting in corruption of the overall signal being transmitted. Our governments therefore decided that it is better for them to own the bandwidth and only selectively permit chunks of it for different uses and different users. A good portion of the spectrum is reserved for the defense forces. Some for television signals. Some for mobile telephony and so on.
Also, the atmospheric and geographic constitution of the wave-path favor certain frequency ranges over others for the distances involved. Chunks of spectrum, being a scarce resource, are usually auctioned out to the highest bidder. Especially for mobile telephony (remember the 2G spectrum case?)
d. ISM band?
There have been numerous applications of wireless/radio technology in the transmission of sensor signals, especially over short distances. Thankfully, certain frequency bands have been kept out of strict government licensing norms as long as they follow certain protocols. These are called the Industrial, Scientific and Medical (ISM) bands. There are some LP-WAN technologies that work in the ISM bands and some that work in the commercially licensed bands.
While the ISM band is pretty restrictive, the fact that it is unlicensed means that one could easily use it with readily available tools and platforms, and that generally it would cost less to send a packet over an ISM band than over its licensed counterpart (because there, every packet essentially bears some tiny fraction of the licensing cost). On the flip side, the licensed bands are usually better placed for more bandwidth- or throughput-intensive applications.
WAN is the good ‘ol Wide Area Network. So this is a term used to represent a network that is bigger than a typical LAN (Local Area Network) as found in a typical office space, spanning multiple square kilometers in geographic area and usually plumbed through a set of gateways (that convert a packet received over the incoming protocols like LP-WAN into standard internet packets).
LoRaWAN simply stands for Long Range Wide Area Network. It is an LP-WAN technology that was developed on top of an innovative physical layer radio solution by a company named Semtech Corporation. It is actually a protocol/ a network stack that operates on top of LoRa as a (physical layer) modulation protocol. (So, yes LoRa modulation could also be used independently for long range point to point communication without a WAN protocol on top of it).
LoRaWAN functions within the ISM bands, and therefore the operating band varies depending on the geographic region the solution is intended to operate in. Check them out here. What LoRa provides is really great range for tiny battery-powered nodes. We are talking here about 10 km line-of-sight range from something similar to a pair of AA cells that will last across their nominal shelf life (a couple of years at max). This is because the LoRa chip pretty much 'sleeps' all the time at low power (think microamperes) and wakes up only to transmit, and then executes a synchronized receive at a few milliamperes of current for a few milliseconds.
Its resiliency is also because of its unique modulation technique, which enables it to work even at receiver sensitivity levels of the order of -130 dBm! Its modulation technique is called chirp modulation (chirp spread spectrum). Here's how chirp modulation works. For a digital signal, we are essentially sending a pattern of binary symbols (0 or 1). Imagine for a 1, we send a radio signal that increases from f0 to f1 and for a 0 we send a signal that decreases from f3 to f2.
Play around with these online audio chirp generators and then imagine the same for radio 🙂 Then imagine that happening real fast and in a sequence (upward chirp or a downward chirp) according to the sequence of bits being transmitted.
Now, chirp modulation is fundamentally more robust because it is an application of what is called spread spectrum modulation. By spreading a single state (0 or 1) across a linear range of frequencies, we have made the signal a lot more resilient to noise, because at the receiving end it is easier to detect a general sweep occurring than to detect just the occurrence of an absolute frequency.
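A toy Python sketch of the idea: each bit becomes a sweep of instantaneous frequencies, upward for a 1 and downward for a 0. The band edges are arbitrary, and for simplicity both chirps sweep the same band (unlike the f0/f1 vs f3/f2 split above):

```python
# Chirp "modulation" sketch: a bit maps to a linear frequency sweep.
F0, F1 = 100, 200  # chirp band edges (arbitrary units, not LoRa's)

def chirp(bit, steps=5):
    """Return the instantaneous-frequency trajectory for one bit."""
    sweep = [F0 + (F1 - F0) * s // (steps - 1) for s in range(steps)]
    return sweep if bit == 1 else sweep[::-1]  # up for 1, down for 0

def modulate(bits):
    return [chirp(b) for b in bits]

print(modulate([1, 0]))
# [[100, 125, 150, 175, 200], [200, 175, 150, 125, 100]]
```

A receiver then only has to spot the direction of the sweep, which survives noise far better than a single absolute tone.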
Now, the wider the chirp spread (f1 - f0), the wider the bandwidth it needs (while still being narrow-band) and the better its ability to be detected in noisy environments or across larger distances. This is where LoRaWAN has another neat trick up its sleeve: it can smartly adapt the chirp spread depending on what it deems appropriate based on its previous communication history with a particular node.
In addition, depending on the bandwidth available in each region, the protocol employs multiple channels (each channel corresponding to a chirp frequency band), with channel hopping as well, to support multiple devices communicating at the same time.
However, it must be said that LoRaWAN is generally a non-ACK-based protocol. That is, unless explicitly designed to do otherwise, its nodes just transmit a packet (uplink) and hope that it reaches at least one gateway in the vicinity. There is usually a packet counter implemented on both the node and the gateway and/or the application that can be used to figure out if any packets were dropped.
And more interestingly, the packets are encrypted using multiple keys and AES. This enables secure communications. So even a gateway in its path cannot see the packet data being transmitted as long as it does not have the network key and the application key.
You could use the coverage map there to first check if there is already any community powered gateway in your city/ vicinity. If not, you could buy a reasonably cheap gateway, configure it and add it to the open network (highly recommended!). Once you have a gateway, you could simply add your own nodes. Here again, you could either design your own nodes (based on Semtech chips/ SoCs) or use readily available nodes which you just have to add your sensors, your code and credentials to configure and get started.
In India, there are a few private players as well who are providing a LoRaWAN network (Tata Communications and SenRa). At Tinkerbee Innovations we have tested these integrations and have our custom designed tiny LoRaWAN boards that can be configured for a variety of end-use applications.
“Ok, but whats NB-IoT?”
If you recall, we mentioned that some part of the radio spectrum was licensed to telecom operators? Mostly for voice, SMS and internet, and mostly for people communications. True, there also existed a category of applications called M2M, or machine-to-machine, for certain machines to send data over GSM/GPRS networks through a SIM embedded in the end-node. Well, for one, this network was not optimized for small-packet sensor data or for power efficiency. Most earlier-gen sensor nodes therefore required constant power supply or frequent charging (every few hours/days), but thankfully, the relatively wide coverage of the telecom networks gave the sensors more ubiquity.
Sensing an opportunity in the IoT aka Things communication space, the telecom standards body 3GPP (3rd Generation Partnership Project) embarked on a brand new architecture that rode over existing LTE infra, but was designed to use a narrower bandwidth and optimized for extremely low power usage by end-nodes. This was christened NB-IoT, or Narrowband IoT. With NB-IoT, the end-nodes could match LoRaWAN in terms of battery life but still would not match LoRa's performance or range (which is somewhat offset by the already ubiquitous presence of cellular towers) or its price point. The per-node one-time cost and the recurring subscription costs are currently on the higher side. However, being backed by relatively strong and established telecom operators, it may be a matter of time and scale before it could very well be a match. That said, NB-IoT would inherently be able to support higher bandwidth sensors like cameras and OTA (Over The Air) updates pretty much out of the box.
In India Airtel and Reliance have embarked on ambitious NB-IoT programs, but they are yet to have a widely available or open platform for third-party developers to test and play around with.
Also available are SIMCom's SIM7000 and SIM7600 series modules. And many more.
“OK. Who wins?”
Honestly, it is too early to tell. As I see it, there is perhaps room for multiple LP-WAN protocols to co-exist and find their niches in various applications. True, they would compete on many fronts, but each has some clear pros and cons, at least as of now. Of course, technology evolves at such a fast pace that pretty soon we’d be talking about these technologies in past tense 🙂
But until then, it is clear that these LP-WAN technologies are poised to change the way Things communicate, and this will add value to the way we humans interact with the world around us.
At Tinkerbee Innovations, our endeavor is to use sensors, IoT, LP-WAN, ML and Analytics to build solutions that #maximizeYourLife and we have only just begun! In case you are interested to explore this new world with us, do write in to email@example.com
We welcome your views and suggestions and hope that this post was an informative starter: A plain and simple introduction to LP-WAN, LoRaWAN and NB-IoT.
PS: If you are a maker and would like to work with us, do check out our AngelList page
Over the years WiFi has become so prevalent that a hotspot or an access point is the first thing we look for when we reach a location!
Ever since Espressif launched an extremely versatile and low-cost chip that combined a microcontroller and a WiFi module into a single unit called the ESP8266, there has been an explosion of services using/built upon the WiFi protocol.
Some of the earliest prototypes of our StockBees were built using ESP-12F modules (which use the ESP8266).
We love these cute little boards designed by us, that expose an SPI/ I2C bus connector, an 18650 LiPo cell charger, a buzzer for audio feedback and a few switches thrown in. A guitar pick next to it for size comparison.
The decimal number system that we are used to, is called so because it uses 10 distinct symbols to represent any value. These symbols are 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9. Any ‘number’ can therefore be represented by combining these 10 distinct symbols.
What is interesting is that this is not the only way to represent numbers. I mean, one could imagine a system where instead of the symbols 0->9 one used the letters a->j. So 24 could be written as ce. It is funny, but perfectly possible. Since most of us are taught to count in decimal right from the time we were kids, and since we usually have ten fingers, this manner of counting seems 'natural' to us 😊. (The octopus, I'm sure, has an octal (base 8) number system ;p)
Another variation would be if there were 16 symbols instead of just 10! This is exactly what the hexadecimal system does; it has the symbols 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, a, b, c, d, e, f. So, if Bryan Adams were singing '18 till I die' in a hexadecimal world, he would sing '12 till I die' (since decimal 18 is hex 12). I hope you get the drift.
The number of symbols available in the system for representing a value is called its ‘base’. So decimal numbers have base 10. Hexadecimal numbers have a base 16.
In a similar fashion, in the binary world there are unfortunately only two symbols, 0 and 1 (and they correspond beautifully to one of the simpler natures of electricity, which either flows/ON state/1 or does not/OFF state/0) (or has the potential to flow or not 😉). So binary numbers are base 2.
Numbers with a particular base are written as a subscript: (X)n
When we deal with decimals, or in places where the n is obvious, we of course omit writing it.
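Python's built-in `format` and `int` make it easy to play with bases; here's the decimal 18 example rendered in code:

```python
# The same value in different bases, using Python's built-ins.
n = 18                   # plain decimal 18
print(format(n, "x"))    # 12    -> hexadecimal (base 16)
print(format(n, "b"))    # 10010 -> binary (base 2)
print(int("12", 16))     # 18: parse back from hexadecimal
print(int("10010", 2))   # 18: parse back from binary
```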
This is a great time to be a part of the hardware+software maker movement. The things that you make here are called Things! Seriously 🙂 and by extension, we have IoT, aka the Internet of Things.
There is such a wide variety of low-cost, easy-to-program and reasonably simple-to-integrate microcontrollers out there that it's literally a bonanza. And it only keeps getting better! There is a huge community of makers, and there are tons of videos and tutorials on the web that can help you learn about anything and everything you will need to make your own Thing. If you have a problem to solve and a solution in mind, my attempt here is to give you a primer: a meek peek into this wonderful world. Perhaps, in the near future, I will share my notes on a few deep-dives as well.
Almost all projects are of the following nature:
Input(s) → Processing → Output(s)
(OH: c'mon!) This simplified view is a great place to start as we dive a bit deeper. A simple example of such a system could be:
There is a button, pressing it (input) will turn the connected light (output) ON or OFF (processing: toggle light state).
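That little system can be sketched in a few lines of Python, with a variable standing in for the light and a function call standing in for the button press (no real hardware involved, obviously):

```python
# Input (button press) -> Processing (toggle) -> Output (light state).
light_on = False

def press_button():
    """Processing step: flip the light's state and report the output."""
    global light_on
    light_on = not light_on
    return light_on

print(press_button())  # True  (light turns ON)
print(press_button())  # False (light turns OFF again)
```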
Inputs are read from sensors that translate some physical phenomenon, or some human intent, command or instruction expressed through physical activity (like a button press), into electrical signals.
Now, these electrical signals could be analog. Analog signals are those where a variation of the signal amplitude/strength (measured in volts, or 'potential') or current (measured in amperes, or 'flow') can be measured to infer something about the phenomenon being measured.
The signals could also be digital, where the measured quantity is represented as zeros and ones. So, essentially a set of ON/ OFF pulses (binary representations, that the microprocessor understands) that encode a number/ value say in the decimal system (which we tend to relate to much better).
When the signals are analog, for the computer to make any sense of them, they must be turned into digital or binary form through what is called sampling. This is done by an ADC (Analog-to-Digital Converter). Most microcontrollers have on-board ADCs, but one could use a separate ADC as well.
So, sampling involves taking a snapshot of an analog signal at a point in time and then the ADC converts that value into a binary (base 2) representation.
To get a meaningful representation of the original signal, the sampling rate must be high enough. So, the higher the sampling rate (frequency, or the number of sample readings taken per second), the better the signal fidelity. For example, if you record your voice using a microphone on a computer, a typical sampling rate would be 22 kHz. That means the sound card has an ADC that takes 22 thousand samples of your voice signal's amplitude from the microphone every second and stores each of them onboard!
Also, the number of bits that the system uses to represent each discrete signal sample determines its accuracy. For instance, an 8-bit sampler can represent 2⁸ or 256 different signal values (from 0000 0000 to 1111 1111). Assuming we are measuring a maximum of 1 volt using this sampler, this gives a resolution of 1/256 or 0.0039 V. So this system is capable of measuring variations in voltage as small as 0.0039 V anywhere up to 1 V, but if an increment in the voltage is smaller than that, it will not be able to measure it. Now, if the sampler supported 10 bits, that would give 2¹⁰ or 1024 possible values between 0 V and 1 V. This translates to a resolution of 0.0009765 V! That is evidently a lot more granular than an 8-bit sampler.
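Here's the same resolution arithmetic as a short Python sketch, plus a toy quantizer showing how detail below one step gets lost. The 1 V full-scale range follows the example above:

```python
# ADC resolution for an n-bit sampler over a given full-scale range,
# plus a simple quantization of one analog reading.
def resolution(bits, full_scale=1.0):
    """Smallest voltage step an n-bit ADC can distinguish."""
    return full_scale / (2 ** bits)

print(round(resolution(8), 4))    # 0.0039 V per step (256 levels)
print(round(resolution(10), 7))   # 0.0009766 V per step (1024 levels)

def quantize(voltage, bits, full_scale=1.0):
    """Snap an analog voltage down to the nearest representable ADC code."""
    step = resolution(bits, full_scale)
    return int(voltage / step) * step

# 0.5017 V read by an 8-bit ADC: the 1.7 mV of extra detail is lost.
print(round(quantize(0.5017, 8), 4))  # 0.5
```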
So, the higher the sampling rate and the higher the number of bits returned per sample, the closer the digital signal will come to representing the analog one it just attempted to digitize. Theoretically, what this means is that digital always loses 'some' of the original analog signal during the process of conversion, however hard it may try. But then, these losses tend to be negligible in high fidelity systems, and once the signal is digital, well, it's food for a microprocessor and you can do a zillion things with it that you may not have been able to pull off when it was in analog form.
So what kind of sensors are available? Well, a whole lot. Here’s an indicative list:
a. Temperature
b. Pressure
c. Sound
d. Vibration
e. Location
f. Motion
g. Magnetism
h. Touch
i. Light
j. Humidity
k. Presses (buttons)
Just google for ‘x sensor’ where x is the thing you wish to measure/ read as an input and you should get to some sensor that does the job for you. There are some sensors that come with an ADC built in, so you could read their output straightaway.
For systems to be useful, they need to be able to present their output in some form. Where humans are involved, these better be human readable/ interpretable forms. This is where output systems come in.
Simply put, the output systems do a reverse of what the input systems do. They translate the digital output values into some physical phenomenon, like: displaying them on a screen (light), voice output or creating a buzzer sound. At times, they need to be sent to other systems which will take them as inputs. One example would be if this needs to be transmitted to a server so that a log could be maintained, or an email sent!
An indicative list of output systems would include displays/screens, LEDs and indicator lights, speakers and buzzers, and network interfaces that transmit values on to other systems.
Consider a weather station which needs to measure temperature, humidity, wind-speed and location. Evidently, more than one input is necessary here. And you might need to convert the inputs read into some other scale, notify a central system periodically, and raise an alarm if there is a huge variation in any of the input values. Or combine multiple inputs to predict whether a storm event is likely. This needs a processor. A processor, in simple terms, is something that can execute a defined set of operations on given inputs. Microcontrollers have a 'computer' inside them. Usually they are RISC (Reduced Instruction Set Computer) designs, which means they may not have the whole set of capabilities that your desktop computer has, but they would have just enough to pull off your job. So, now you need to choose a suitable microcontroller.
These are a few important considerations while narrowing down your choices:
a. Number of inputs you will need and their types (eg: 2 analog inputs and 3 digital inputs)
b. The voltage levels of the input sensors
c. Physical size limitations
d. The amount of processing required
e. Number of outputs you will need (eg: LCD display + WiFi + SD card logging)
f. Power requirement (access to mains power? Long battery life? Low cost of power? Solar?)
g. Ease of prototyping and scaling
Once you have narrowed down your requirements, just search the net to find the suitable options.
What options do you have as of end-2017? (What I love and know for sure is that this list is bound to get obsolete soon 😊)
Raspberry Pi boards suit more complete systems that can interface with a set of peripherals like keyboards, mice and monitors. Their most recent addition is the Raspberry Pi Zero W, which has HDMI out, WiFi and Bluetooth, costing about Rs. 1500 (all connectors etc. included!).
Arduino offers a wide variety of choices. These guys literally set the maker movement on fire by introducing a simple programming interface and easy-to-prototype inputs/outputs. Most of them use ATMEL processors.
Low-power BLE-capable devices from Nordic Semiconductor are slightly more difficult to program but are powerhouses enabling a revolution in portable devices like smart-bands and other wearables. Most of these use ARM Cortex processors.
There are a lot more options out there; I've only picked my favourites that get most jobs done. Of course, we have folks like Intel playing catch-up with their Edison series et al, but the kits above rule the roost.
So, there you are; You have now been introduced to programmable things! Go figure and make!