Monthly Archives: August 2017

Blockchain In IoT: The Future Of Smart Connectivity

Hussain Fakhruddin

Hussain Fakhruddin is the founder/CEO of the Teknowledge mobile apps company. He heads a large team of app developers, and has overseen the creation of nearly 600 applications. Apart from app development, his interests include reading, traveling and online blogging.

Blockchain in IoT

 

Internet usage, connectivity, and the digital infrastructure as we know it are changing. The value of the global IoT (Internet of Things) market is set to zoom past the $3 trillion mark by the end of 2020, with more than 20 billion connected devices in active use. However, this growth is not going to be all smooth; challenges – particularly cybersecurity threats and the ever-increasing volumes of data and network connections – are bound to crop up. A recent IDC report has predicted that 9 out of every 10 firms that have already implemented IoT standards are likely to face security breaches this year. Blockchain distributed ledger technology – which powers bitcoin and has been hailed as the ‘internet of value’ – can go a long way in refining IoT, cutting out key performance issues, and ensuring greater security. In today’s discourse, we will put the spotlight on how blockchains can help in making IoT stronger:

  1. Moving on from a centralized system

    Irrespective of the particular nature of an application, IoT typically depends on a central cloud server/gateway for device identification, authentication and data transfer. As the domain of internet usage expands across industries, establishing such gateways is likely to prove problematic, particularly in remote areas where connectivity or signal strength is poor. A blockchain is, by definition, decentralized, and it does away with the need for such centrally located servers. Instead, data resides in all the ‘nodes’ of the distributed, trustless network – ensuring smoother, autonomous operations.

  2. Making smart devices ‘smarter’

    In early 2015, IBM and Samsung collaborated to launch the decentralized ADEPT (Autonomous Decentralized Peer-to-Peer Telemetry) platform. It was tested on a ‘connected washing machine’, which was able to track the usage of detergents, place orders, and make bitcoin payments for buying detergents – all on its own. This is a classic example of how blockchain technology can make IoT-powered smart devices well and truly autonomous – with robust self-maintenance, M2M communicability, and peer-to-peer transaction capabilities. In ‘smart homes’, a blockchain-based IoT infrastructure can enhance the efficiency/productivity of the devices, while minimizing electricity and/or energy consumption levels. Private blockchains can be used to boost the security of ‘connected homes’ – with biometric user-authentication data stored in the network (Australian telecom company Telstra is already doing this). The technology can be used to improve the performance and reliability of driverless cars as well.

  3. Peer-to-peer data transactions

    The number of connections and transactions through IoT systems is going up at an exponential rate, resulting in an ever-increasing need for computing/processing power. Blockchains, too, require uniformly high levels of CPU performance. The system can manage this issue by opening up the possibility of buying and selling anonymized data (i.e., data monetization) originating from the connected devices. Apart from authorized, independent third-party agents, OEMs and data providers will be able to perform this data trading (payments will, of course, be via bitcoins). The prospect of buying and accessing this data would motivate external parties to provide additional CPU power and invest in digital renewable resources – strengthening the overall blockchain and IoT setup.

Note: The energy generated by IoT solar panels can be traded in exchange for cryptocurrencies. The corresponding transactions would be stored on the blockchain.

  4. Greater security assurance

Nearly one-fifth of organizations’ yearly security budgets will be accounted for by IoT security expenses in 2020. Concerns over the reliability of ‘connected systems’ have been rising – with reports of data hacks, digital identity thefts and distributed denial-of-service (DDoS) attacks becoming alarmingly frequent. Blockchains can easily add an additional layer of security to IoT – since they do not have vulnerable centralized servers, which malicious agents have traditionally viewed as single points of attack. With blockchain technology, a mesh network can be created – keeping risks of ‘data impersonation’ and ‘device spoofing’ at arm’s length. The distributed ledger is immutable, ensuring that data/transaction records cannot be modified or deleted by unauthorized hackers. Even if someone goes through the trouble of altering each stage in the overall chain, the process would be too costly and troublesome to be worthwhile. Distributed, decentralized control would facilitate lower latency and higher throughput, while ruling out chances of security breaches.
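The tamper-evidence property described above can be illustrated with a minimal sketch (an illustration only, not a production blockchain): each block stores the hash of its predecessor, so altering any one record invalidates every later link in the chain.

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents deterministically."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(data, prev_hash):
    return {"data": data, "prev_hash": prev_hash}

# Build a tiny chain of (hypothetical) IoT sensor readings.
chain = [make_block("genesis", "0" * 64)]
for reading in ["temp=21C", "temp=22C", "temp=23C"]:
    chain.append(make_block(reading, block_hash(chain[-1])))

def is_valid(chain):
    """Every block must reference the hash of the block before it."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

print(is_valid(chain))         # True
chain[1]["data"] = "temp=99C"  # an attacker alters one record...
print(is_valid(chain))         # False: every later link is now broken
```

To make the tampered chain valid again, the attacker would have to recompute every subsequent block – which is the ‘too costly and troublesome’ property the paragraph above refers to.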

  5. Reduced costs

With billions of connections and trillions of transactions, managing communications through IoT devices is likely to become an expensive affair over time (if it is not so already). The need for establishing full-featured gateways/control centers/servers drives the expenses up further. With blockchains, this need for a ‘middleman’ or a ‘central gateway’ is done away with – and hence, significant hardware, protocol and communication costs are removed. All communications, right from device details to data exchanges, happen on a direct, peer-to-peer basis. IoT gateways are costly – and they are not required in a blockchain framework.

  6. Trustless messaging and smart contracts

Blockchain technology enables smart devices to exchange protected, trustless messages with one another – making them truly ‘autonomous’. Smart contracts, which pre-specify the rules of the transaction(s) (generally as ‘if-then’ condition statements), can be created between two parties easily, ensuring that operations can be managed remotely – without the interference of a human agent or a centralized brokerage system. For instance, a ‘smart irrigation’ system can be ‘instructed’ by the field sensors to release or stop the flow of water. The trustless messaging system powered by blockchains works much like the communications in a bitcoin network. The absence of a central control unit also reduces processing times – accelerating data exchanges.
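The ‘if-then’ logic of such a contract can be sketched in a few lines. This is a simplified illustration of the ‘smart irrigation’ example above, not actual smart-contract code; the moisture threshold and return values are hypothetical.

```python
# Sketch of a smart-contract-style rule for the 'smart irrigation'
# example: field sensors report soil moisture, and the contract's
# 'if-then' condition decides whether the valve opens.
# The threshold value is hypothetical.

MOISTURE_THRESHOLD = 30  # percent (hypothetical)

def irrigation_contract(moisture_reading):
    """If soil moisture is below the threshold, then open the valve."""
    if moisture_reading < MOISTURE_THRESHOLD:
        return "OPEN_VALVE"
    return "CLOSE_VALVE"

print(irrigation_contract(18))  # OPEN_VALVE
print(irrigation_contract(45))  # CLOSE_VALVE
```

In a real deployment, the rule would live on the blockchain itself and execute automatically when sensor readings arrive – no human agent or central broker involved.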

  7. Blockchains as independent agents

Peering into the future, it can be reasonably expected that blockchain networks (and not only the IoT devices included in them) will evolve into completely independent entities. These autonomous, independent blockchains (often referred to as ‘distributed autonomous corporations’, or DACs) have immense potential for adoption in the banking and financial arbitration industry. The components of a decentralized system (say, e-couriers) can gradually replace the centralized human management layer – removing the risks of human error in the process. A DAC can also send update requests to the underlying software of other, similar independent blockchains. Things can become more automated, and more seamless, than ever before.

  8. Increased scalability

The volume of IoT operations (and with it, the number of devices, gateways and other smart accessories) will continue to increase in the foreseeable future. Existing systems have to be scaled up on a regular basis – something that is not really possible with a centralized server. Blockchains, once again, offer an easy, and mighty effective, alternative. The distributed ledger system offers easy scalability, and can deliver improved security to the expanding sets of smart gadgets. What’s more, it also becomes fairly simple to locate a compromised device (for instance, one captured in a botnet or infected with malware), and prevent it from putting the health of the entire system (which can be a smart home, an enterprise setup, or even a smart vehicle network) at risk. Additional devices can be supported in a blockchain infrastructure without any significant need for extra resources.

  9. Transparency and ownership

Blockchain transactions take place after mutual trustless consensus among all the interested parties in the network. A single, secure record of all the transactions is maintained in the distributed ledger. Since tampering with these records is, for all practical purposes, impossible, potential confusion over the ownership of digital assets is ruled out. The transparency of the recordkeeping is further enhanced by the fact that each IoT transaction on the platform is timestamped. Individual users and organizations are encouraged by the trustworthiness of blockchains – built by the device information records and transaction/exchange records maintained in the ledger. The communications might be termed ‘trustless’ (since the transacting parties are not acquainted, and generally use pseudonyms)…but blockchains actually build trust in IoT frameworks in a big way.

  10. Tracking the history of IoT devices

With billions of smart devices in the IoT ecosystem, maintaining the history of transactions carried out with any one device is a huge challenge. A distributed ledger system can help in this regard. When an IoT system is bolstered by blockchain technology, participants can view the records of all the data exchanges that have taken place between the concerned device and human agents, as well as the internet. The history of transactions with other ‘connected devices’ can also be maintained and viewed as and when required. These records would also offer an insight into the current health and performance potential of devices (along the lines of ‘predictive maintenance’).

We are well on our way in the journey towards a ‘decentralized, shared future’ with blockchain-powered IoT operations. Being a relatively new technology, blockchains still have several challenges to overcome. For starters, the huge computational power required for transactions might prove to be a roadblock, while determining the best computational models, establishing the infrastructure, monitoring data access levels and managing the initial costs require close attention too. The ‘51 percent attack’ problem (an attacker controlling more than half of the network’s computing power can validate fraudulent changes to transaction records) is, arguably, the biggest point of concern – particularly when blockchain is used in relatively small IoT systems (say, a home or an office). There is no doubting the importance of blockchains in IoT – but a few rough edges have to be ironed out to realize the full benefits of the technology.

 

Planning To Launch A Mobile App? Here’s 12 Things You Need To Know


 

Things to know before app launch

 

According to a recent App Annie report, the total number of mobile app downloads (from the Apple App Store and Google Play Store) shot up to just a shade under 25 billion in Q1 2017 – a ~15% YoY increase. Global revenues from apps are expected to reach $190 billion by the end of this decade…up 171% over the 2015 figure. App availability is increasing all the time, with Apple and Google combining to offer more than 5 million downloadable applications. User spending on apps is going up as well (in Q1 2017, there was a 45% YoY jump).

The above stats might make the task of developing and launching mobile apps seem just about as easy as a walk in the park. However, a look at the number of frequently discarded/uninstalled apps and the low engagement rates (worldwide, 24% of users use an app only once) would prove that things are not as straightforward. You need to frame and follow smart, informed strategies to make your app launch a success. Here are some useful tips you should make note of before releasing a mobile app:

  1. Thorough market research is vital

    A new app won’t exist in a vacuum, and you need to do all the necessary homework carefully before releasing it. Study user opinions and trends (you can also conduct surveys online) to find out the things/functions for which a new application can be built. If you already have an idea about the nature of the app you wish to build, check out similar applications from the app store. Find out what the successful apps in that category are doing, and try to come up with ideas on how you can improve on their functionalities. Make it a point to study a couple of failed apps as well – to know (and stay away from!) their mistakes/problems. Identify your target audience first, before proceeding to make an app. Mobile app development needs to be an informed decision, always.

Note: The various ‘Top XX…’ app lists serve as a great reference point for competitor analysis. Monitor the reviews and ratings these apps receive, to get an idea of their best features and/or probable shortcomings.

  2. Chalk out your budget

    You should know something from the very outset…making a mobile app involves significant costs. A basic, no-frills MVP (minimum viable product) with only the most essential features is likely to cost you around $8,000-$10,000 – and as more features and functionalities are added, the figure goes up. To keep the expenses from getting out of hand, you need to frame an app development budget early on. Follow the budget at all times, and keep a record of all the expenses during the various stages of the project. If availability of funds is an issue, you can always list your app idea on a crowdfunding site (Indiegogo, Kickstarter, etc.).

Note: Keep in mind the trade-offs involved between the ‘app quality’, the ‘time of development’ and the ‘development costs’. Think of these as the three vertices of a triangle (the so-called ‘App Triangle’). Never try to cut corners in terms of costs – since that would have an adverse effect on your app’s quality.

  3. Hire developers for your project

    Even the best of app ideas can go to waste if the execution is poor. The onus is on you to find and hire a professional mobile app development company that can create the app in just the way you want. Do some research on the web, prepare a shortlist of app agencies, and request free quotes from each of them. Hire the one that seems most proficient, and get everything (terms of service, contracts, agreements, etc.) in writing. Make sure that the company you delegate your project to has separate teams working on the iOS and Android platforms, and expert in-house graphic designers, animators and app testers. If it is a 2D/3D game app you wish to make, look for app-makers with relevant experience in working with the different game development engines.

Note: If required, the company should be prepared to provide you with non-disclosure agreements (NDA). Also, make sure that the development team can be contacted at any time, and they are willing to work according to your feedback/suggestions. Stay away from app companies that ask for huge upfront payments.

  4. Features of the app

    This one is a tricky affair. Release an app that has only a handful of run-of-the-mill features, and it will be dismissed as ‘too simplistic’. On the other hand, if a newly launched application has too many complex features and controls, most people will not bother to ‘learn’ how it works. As a rule of thumb, include all the ‘must-have’ features of your app in its introductory version (v1.0) – and schedule the ‘nice-to-have’ features for future updates. Understand the precise nature of your app and the likely requirements/behaviour of your target users, to get an idea of the feature set you need to include in the first version (for example, in a mobile shopping app, a secure payment gateway is an absolute must). The app should have some uniqueness about it, to get early users interested.

Note: There should be a single ‘core purpose’ of your app. It should satisfy the need(s) of your target customers in a better manner than the already existing rival applications.

  5. Decide the platform

    With Apple’s iOS and Google’s Android combining to make up more than 99% of the global smartphone market, you need not think beyond these two platforms when determining the compatibility of your app. However, you need to take a call on which of these platforms your app will be available on first. With a projected ~90% market share, Android is the no-brainer choice if you are primarily interested in getting your app out to as large an audience as possible. However, if you care more about revenues, you can go with the iOS platform first (last year, Apple made nearly 3.5X more money than Android, with less than half the downloads). There are advanced cross-platform development tools (like React Native, Xamarin or PhoneGap) already available in the market – but you should ideally keep things simple, start out with one platform first…and then move on to the other.

  6. Start marketing well in advance

    A jaw-dropping 180 billion apps have been downloaded from the Apple App Store alone (as announced in June). The average smartphone user launches 30+ apps in a month, and around 9 applications every day. Given the fiercely competitive nature of this domain, it makes a lot of sense to start marketing/promoting your app several weeks before its actual launch. Hire professionals to design an optimized, responsive, user-friendly website for your app – one that provides detailed information about the main features and use cases of the application (include a FAQ section). Publish short, engaging blog posts on the website. Publish news and teaser updates about your app on the various free and paid press release sites. Be diligent with your social media marketing efforts as well. Create dedicated Facebook and Twitter profiles for the app, and post updates/share tweets on a regular basis. Plan out email marketing campaigns. The app marketplace is crowded – and you need to be proactive in making people aware of the existence of your app.

Note: For app marketing on the web, you can also connect with professional bloggers from the relevant category. Find out whether you can do a guest post about your app on such blogs, and/or if the blog-owners can feature your app (and do a short review). Link back such blog pages to the app website and, after the app launches, to the store page.

  7. Monitor app size and battery usage

    If you feel that incorporating as many features and graphic elements as possible in a new app will be a surefire way of increasing its popularity, you might well be very wrong. Too many features typically make an app ‘heavy’ (i.e., too large in size). Smartphone owners are generally reluctant to install apps that are very big (with sizes running into hundreds of MBs), particularly for two reasons: a) they are perpetually running out of storage space, and b) if connectivity is weak, the download might be interrupted. The average sizes of iOS and Android apps are 34.3 MB and 11.5 MB respectively, and you should ideally keep your app’s size within those limits. Also, pay attention to how your app affects the battery life of target devices. It is very easy to track the battery usage of apps installed on a phone/tablet – and if your app causes too much battery drain, it will be uninstalled soon enough by most users.

Note: Make sure that your app does not eat up too much bandwidth or mobile data either. You might want to make your app usable offline as well (that would, of course, depend on the app’s nature).

  8. Monetization and analytics

    Unless it’s a college research project, you will want to make money out of your mobile app, right? Before the launch, you need to be very clear about how the application should be monetized. In both the Apple App Store and Google Play Store, an overwhelming majority of the listed apps are free – and ideally, you should start off with a ‘freemium’ revenue model as well. In other words, your app will be free to download, and users will have the option to upgrade to a ‘premium’ or ‘pro’ version by paying a token amount (say, $1.99). This ‘premium’ version would have more features, zero ads, and other such attractions. In a free app, you need to decide whether monetization will be done with the help of in-app ads or in-app purchases (IAP), or a combination of both. If you do go with ads, make sure that the advertisements are not inappropriate for the potential audience, and that they do not interfere with the user experience (UX) in any way. You also need to have a built-in analytics feature in your application. That would enable you to study how people behave while using your app, and the points (if any) where most drop-offs occur. This information would help you improve the app later.

Note: The average price of an iOS application is $1.02, while that of an iOS game is $0.49.

  9. Stay updated on the latest store/platform updates

    The upcoming iOS 11 platform will not support 32-bit apps. ‘Android Instant Apps’ were announced at this year’s Google I/O conference. While the actual development and coding will be done by the app development company you hire, you need to stay abreast of all the new updates, tweaks and changes in regulations on the Apple and Google platforms. Make it a point to abide by all the clauses mentioned in the Apple ‘App Store Review Guidelines’ and/or Android’s ‘Developer Policy Center’. Ask the developers working on your project to carefully follow the design regulations. Remember, any violation of the app store rules is likely to result in your app submission being rejected.

Note: Since 2015, Android apps have been manually reviewed. The average app review time at the Apple store is 2 days.

  10. Testing and quality assurance

Prior to launch, you need to be absolutely sure that your app has no bugs or performance issues whatsoever. Problematic applications are not likely to be approved (Apple’s regulations are stricter in this regard) – and even if a buggy app makes its way to the store, the consequences can be dire. Early users, on discovering the issues, are likely to leave poor ratings and unfavourable reviews – creating negative ‘word-of-mouth’ publicity, and hampering the app’s download potential in the long run. No matter how quickly you release bug-fix updates, this initial damage cannot be undone. To stay away from such problems, it is extremely important to test all the features of your app before its submission. Apart from using simulators and emulators, beta versions of the app have to be tested on actual devices, to detect any probable glitches. Your app needs to be of uniformly high quality…otherwise, it is bound to fail.

Note: For beta testing iOS applications, TestFlight (with the new 10,000-user limit) is the most suitable platform.

  11. App store optimization

On average, 6 out of every 10 app downloads happen through search activities in the app store. That, in turn, implies that if your app is not easily ‘discoverable’, its download figures will remain low. This is where app store optimization (ASO) comes into the picture. Find out how the top-ranking apps in your category are listed, and the keywords targeted in their app store descriptions (in the Play Store, an additional ‘short description’ is required as well). Select the app name carefully, and choose an optimized, interesting app icon. If possible, add a tagline to the name of the app, with a keyword included in it. Identify the most relevant search terms likely to be used by people while looking for an app like yours, and use them as keywords (in a natural manner) in the app descriptions. For Android apps, upload a short, engaging introductory video. Use high-quality screenshots showcasing the most important screens of the application. Avoid adopting an overly promotional tone in your descriptions – and highlight the key features of the app instead (the elements that would motivate people to give it a try). Getting featured in the App Store can increase the download count of an app by more than 90% (downloads can jump by 500% in South Korea), and for that, excellent ASO strategies need to be in place.

Note: The name of an iOS app can contain up to 30 characters, along with a 30-character (max) subtitle, 170-character promotional text and the app description. For Android apps, the title can contain 50 characters, the short description can be up to 80 characters, and the long description should not be more than 4,000 characters.
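These field lengths can be checked programmatically before submission. The sketch below uses the limits quoted in this note (store limits change over time, so treat the numbers as illustrative); the field names are hypothetical labels, not official API identifiers.

```python
# Quick pre-submission check of store-listing field lengths,
# using the limits quoted above (limits may change over time;
# the field names here are hypothetical, not official API names).
LIMITS = {
    "ios_name": 30,
    "ios_subtitle": 30,
    "ios_promo_text": 170,
    "android_title": 50,
    "android_short_desc": 80,
    "android_long_desc": 4000,
}

def check_listing(fields):
    """Return the names of fields that exceed their store limit."""
    return [name for name, text in fields.items()
            if name in LIMITS and len(text) > LIMITS[name]]

listing = {
    "ios_name": "My Shopping App",
    "android_short_desc": "x" * 100,  # 100 characters: too long
}
print(check_listing(listing))  # ['android_short_desc']
```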

  12. Build the virality of your app

It would be a serious mistake to consider app marketing as a ‘one-shot game’. Promotions have to be done constantly, and offline channels have to be factored in as well. You can plan a ‘beta launch’ or a ‘soft launch’ of your app in select markets – before going ahead with a full-blown release (you will be able to gauge initial opinions, reviews and feedback that way). Provide promo codes for your apps to the ‘power users’ and ‘influencers’, launch referral programs and content-based campaigns to raise the buzz about the application. On social media channels and portals (Facebook, LinkedIn, Reddit, etc.), share the concept of your app and actively seek the opinions/suggestions of your peers/potential users. The trick lies in building up huge hype about your soon-to-launch mobile app…and more importantly, being able to actually live up to that hype.

Work with your app development partner agency to finalize how your app’s automated notification system will work. Find out how user queries and complaints will be handled from the backend – and how frequently updates will be released. There should not be any uncertainty over the platform versions (backward compatibility of your app has to be decided) or the devices on which the application should work seamlessly. Follow the above points closely, and significantly enhance the chances of your newly launched app becoming a big hit.

 

An Overview Of Blockchain Technology (or, the Internet Of Value)


 

Overview of blockchain technology

 

Over the last few years, the buzz about blockchains has grown immensely. In 2015, banks across the globe invested around $75 million in the technology. By the end of 2019, that figure will have jumped to $400 million – clearly underlining the fact that Satoshi Nakamoto’s (whoever he or it or they is/are) innovative distributed ledger platform is only at an early stage of growth at present – and will gain even more recognition, understanding and popularity in the near future. The mounting interest in blockchains is also reflected by the huge investments made by venture capitalists in companies in this sector. Tech biggies like Microsoft, IBM and PwC have already started to work with the technology, and in today’s discussion, we will take a look at some interesting tidbits about blockchains:

  1. The need for blockchains

    While internet services are necessary for a truly ‘shared, secure economy’, their presence is not a sufficient condition. By nature, web-based services are created to manage, store, transfer and monitor ‘information’ – they are generally not engineered to create ‘value’ (i.e., the internet can make business processes more efficient, but cannot change the processes per se). Blockchain, often referred to as the ‘internet of value’ (IoV, anyone?), plugs that gap effectively. Also, unlike traditional internet tools and portals, blockchains do not have any centralized servers – and no fees are payable for their services (since there are no intermediaries or so-called middlemen). Blockchains are required for direct, peer-to-peer exchange of value through a robust digital channel. Implementation of this distributed ledger technology also ensures greater engagement (a large cross-section of people cannot afford the services of intermediaries), and offers greater data privacy and confidentiality.

  2. Understanding a blockchain

    The name might seem rather nerdy, but blockchains actually represent a fairly simple digital technology. To put it simply, a blockchain is a one-of-its-kind digital ledger or recordkeeping system that tracks and records all transactions in a network. A new ‘block’ is added to the ‘chain’ every time a new transaction takes place on a particular asset (apart from cryptocurrency transactions, blockchains can also be used to store retail transactions, medical records, supply chain data, and a host of other record types). Every relevant member of the network can view a transaction (say, between Person X and Person Y), although the two parties actually involved might opt to keep their identities hidden (or use pseudonyms). In other words, a blockchain system is just like a public ledger, where the transaction records are distributed to all interested parties. The information chain (with time-stamped blocks) is made secure with public-key cryptography – and no single user can modify, delete or tamper with any ‘block’ of information on his/her own.

Note: The initial block in a transaction chain on a blockchain is called the ‘genesis block’ (numbered 0 or 1). The individual blocks are connected to each other with the help of cryptographic digests called ‘hashes’.
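The genesis block and the hash links described in this note can be sketched in a few lines of Python (an illustration only; real blockchains add proof-of-work, signatures and much more):

```python
import hashlib
import time

# Minimal sketch of how time-stamped blocks are linked by hashes.
def compute_hash(index, timestamp, data, prev_hash):
    payload = f"{index}{timestamp}{data}{prev_hash}"
    return hashlib.sha256(payload.encode()).hexdigest()

def new_block(index, data, prev_hash):
    ts = time.time()
    return {"index": index, "timestamp": ts, "data": data,
            "prev_hash": prev_hash,
            "hash": compute_hash(index, ts, data, prev_hash)}

# The 'genesis block' has no predecessor, so its prev_hash is a fixed value.
genesis = new_block(0, "genesis", "0" * 64)
block1 = new_block(1, "X pays Y 5 units", genesis["hash"])

print(block1["prev_hash"] == genesis["hash"])  # True: the blocks are chained
```

Because each block's own hash covers its predecessor's hash, changing any earlier block changes every hash after it, which is what makes tampering detectable.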

  3. Who invented blockchains, anyway?

    Ah, that’s something no one quite knows for sure. All that is known is that the first white paper introducing blockchains was published on 31 October 2008 – and the first ‘genesis block’ was mined in January 2009. At first, it was widely believed that a man called Satoshi Nakamoto had invented the technology (he was also credited as the inventor of the bitcoin cryptocurrency). However, the supposed ‘actual Satoshi’ categorically stated in 2014 that he had ‘nothing to do’ with bitcoins or blockchains. Since then, the names of Michael Clear (a cryptography graduate from Trinity College, Dublin) and Craig Steven Wright (an Aussie coder and entrepreneur) have surfaced as possible real identities of Satoshi Nakamoto. There is also a suspicion that the three men who filed the ‘encryption patent application’ (Charles Bry, Vladimir Oksman and Neal King) might have collectively worked under the pseudonym of Nakamoto. The three have, however, denied this. The brains behind blockchains remain unidentified – and it makes for delightful tech gossip!

  4. The operation of blockchains

    We have already explained the nature and main purposes of a blockchain. Let us now get an idea of how a distributed ledger actually works. The process starts with a transaction request from Entity 1 to Entity 2. A ‘block’ is created to represent that transaction on the network, and that ‘block’ gets automatically distributed to all the authorized, interested nodes. These other network members have to approve the transaction (i.e., verify its validity). Once that is done, the ‘block’ gets added to the ‘chain’, and the transaction takes place between the two parties. The transaction details get shared in the ledger of every member of the system (as indelible records). That, in turn, makes the entire system transparent and ensures that everyone is aware of all the relevant transactions. All forms of digital currency transactions can be recorded on a blockchain – and the system also ensures that the same bitcoin cannot be spent twice (the ‘double-spending’ problem).

Note: In the financial services sector, blockchains have already proved instrumental in removing the (often significant) time-gap between transaction and settlement. Disintermediation is the biggest reason for this.
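
The request–approve–append flow described above can be sketched in a few lines of Python (a simplified model for illustration only – real networks rely on consensus protocols such as proof-of-work, and the class and entity names here are hypothetical):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 digest of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class Ledger:
    """A toy shared ledger: a block is appended only after every node approves it."""
    def __init__(self):
        self.chain = []

    def propose(self, sender, receiver, amount, nodes):
        block = {
            "prev": block_hash(self.chain[-1]) if self.chain else "0" * 64,
            "tx": {"from": sender, "to": receiver, "amount": amount},
        }
        if all(node(block) for node in nodes):  # every member must verify the transaction
            self.chain.append(block)            # only then does the block join the chain
            return True
        return False

# Three nodes that each approve only positive transfer amounts.
nodes = [lambda b: b["tx"]["amount"] > 0 for _ in range(3)]
ledger = Ledger()
ledger.propose("Entity1", "Entity2", 5, nodes)   # approved and appended
ledger.propose("Entity1", "Entity2", -1, nodes)  # rejected by the nodes
```

Because each block carries the hash of its predecessor, every member ends up holding the same tamper-evident record of approved transactions.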

  5. Can blockchains be hacked?

    Anything that uses digital resources can, in theory, be hacked. However, hacking a blockchain is, for all practical purposes, almost impossible – and the technology, hence, makes transactions and big data more secure than ever before. Since a blockchain is a ‘distributed technology’ and is completely decentralized, there are no centrally located servers for hackers to target. The information stored in the system is shared across all the nodes of the architecture, and is present on the computers of all involved data miners. To successfully hack a blockchain ledger, every record in a chain of transactions would have to be separately tweaked (each block is linked to the previous block, creating the chain-like structure). Experts opine that the cost of hacking a blockchain (in terms of invested time and resources) is generally higher than the potential benefits of doing so. Blockchains might not be hack-proof, but they are the closest thing to it.
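
The ‘every record would have to be separately tweaked’ point can be demonstrated with a toy hash chain – altering one early record invalidates every later link (an illustrative Python sketch; real blockchains hash full block headers):

```python
import hashlib

GENESIS = "0" * 64

def link(data: str, prev: str) -> str:
    """Hash a record together with the previous block's hash."""
    return hashlib.sha256((prev + data).encode()).hexdigest()

def build_chain(records):
    """Each block stores the previous block's hash, forming the chain."""
    chain, prev = [], GENESIS
    for data in records:
        digest = link(data, prev)
        chain.append({"data": data, "prev": prev, "hash": digest})
        prev = digest
    return chain

def verify(chain) -> bool:
    """Recompute every link; any tampered record breaks all later links."""
    prev = GENESIS
    for block in chain:
        if block["prev"] != prev or link(block["data"], prev) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = build_chain(["A pays B 2 BTC", "B pays C 1 BTC"])
assert verify(chain)
chain[0]["data"] = "A pays B 200 BTC"  # tamper with an early record...
assert not verify(chain)               # ...and the whole chain fails verification
```

An attacker would have to recompute every subsequent hash on a majority of nodes simultaneously – which is what makes the attack economically unattractive.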

  6. The bitcoin revolution

    The blockchain technology was introduced as a platform for recording transactions of the Bitcoin digital cryptocurrency (released in 2009). Bitcoin transactions are either anonymous or (more commonly) pseudonymous, and transfers are made/received at pre-specified ‘Bitcoin addresses’ (no mutual trust is required between the transacting parties). Due to its nature, it is extremely difficult to trace the movement of bitcoins (unlike, say, credit card payments or wire transfers). The distributed ledger is periodically updated by the network, after checking the available balances at different ‘Bitcoin addresses’. New, unconfirmed bitcoin transactions are checked roughly every ten minutes by ‘bitcoin miners’, who allocate the necessary computing and processing power in exchange for a certain amount of the cryptocurrency. In 2016, this ‘reward’ was slashed to 12.5 bitcoins for every completed block (it started at 50 bitcoins/block; the reward is halved roughly every four years). With a market capitalization of ~$67.5 billion, bitcoin is by far the most popular cryptocurrency in circulation at present. Ethereum (market cap ~$30 billion) occupies the second spot.

Note: The price of one bitcoin is more than $4000 (subject to occasional dips, like the one this July). In comparison, the rate of a unit of Ethereum varies in the $310 – $330 range.
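
The reward schedule mentioned above follows a simple halving rule: the payout halves every 210,000 blocks (roughly four years). A quick sketch:

```python
def block_reward(height: int, initial: float = 50.0, interval: int = 210_000) -> float:
    """Mining reward at a given block height: halves every `interval` blocks."""
    return initial / (2 ** (height // interval))

assert block_reward(0) == 50.0        # 2009: the original 50 BTC/block
assert block_reward(210_000) == 25.0  # first halving (2012)
assert block_reward(420_000) == 12.5  # second halving (2016), as noted above
```

This geometric decay is also why bitcoin's total supply is capped – the rewards sum to a finite amount.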

  7. The concept of smart contracts

    In a blockchain’s rule-oriented transactions ecosystem, ‘smart contracts’ replace the traditional middlemen, and ensure that everything is optimally automated. Advanced coding goes into the creation of these contracts, along with preset deal workflows, sensor services, distributed apps and custom APIs. The ‘smart contracts’ are triggered whenever certain conditions are fulfilled (for example, blood sugar levels in a medical report, or the wattage in an electric meter, going above a predetermined level) – and the requisite actions are initiated. From intellectual property management, banking and financial transactions, and 3D printing, to manufacturing and delivery logistics – everything can be efficiently managed by blockchain ‘smart contracts’. In the distributed ledger, all business rules are pre-programmed in these contracts, and all members of the network are notified of the same.

Note: Solidity is a Turing-complete programming language developed for the Ethereum platform. It is used for coding smart contracts.
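
A minimal sketch of the trigger logic behind a smart contract – shown here in Python rather than Solidity, with hypothetical names and thresholds:

```python
class SmartContract:
    """Fires a preset action automatically once a reading crosses its threshold."""
    def __init__(self, threshold, action):
        self.threshold = threshold
        self.action = action
        self.log = []  # indelible record of every triggering reading

    def on_reading(self, value):
        if value > self.threshold:
            self.log.append(value)
            self.action(value)

alerts = []
# Hypothetical rule: a blood sugar reading above 180 triggers a notification.
contract = SmartContract(180, lambda v: alerts.append(f"alert: {v}"))
for reading in (120, 150, 195):
    contract.on_reading(reading)
# Only the 195 reading crossed the threshold, so exactly one alert was raised.
```

On a real blockchain, the contract code and its trigger log would live on the shared ledger, so every network member can audit when and why an action fired.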

  8. The role of Keys

    For recording bitcoin transactions on a shared ledger, users need their unique ‘private key codes’, which serve as their passwords to the blockchain system. Every private key is associated with a specific ‘bitcoin address’ (the key, hence, serves as the user’s credentials) – and transactions on the network can be authorized only after a network member has entered his/her key. A user’s ‘private key’ is stored in his/her ‘wallet’. Beyond bitcoin transactions, keys on blockchain systems can be ‘private’ or ‘public’. In broad terms, a public key can be explained as the tool from which ‘public addresses’ on blockchains are generated (via cryptographic hashing).
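
The public-key-to-address step can be sketched as follows (a deliberately simplified model – Bitcoin actually applies SHA-256 followed by RIPEMD-160 and Base58Check encoding, and the key values here are hypothetical):

```python
import hashlib

def public_address(public_key: bytes) -> str:
    """Derive a shareable address by hashing the public key.
    (Simplified: Bitcoin also applies RIPEMD-160 and Base58Check.)"""
    return hashlib.sha256(public_key).hexdigest()[:40]

# The address can be published freely: it reveals nothing about the key,
# yet anyone can re-derive it to check a claimed public key.
addr = public_address(b"hypothetical-public-key")
assert addr == public_address(b"hypothetical-public-key")  # deterministic
assert addr != public_address(b"some-other-key")           # differs per key
```

Because hashing is one-way, publishing an address does not expose the key it was derived from.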

  9. Impact of blockchains on employment

    Blockchains are being created and implemented by…well, anyone involved in digital transactions (including IoT), for the verification of the ‘transaction blocks’. Since this verification process becomes automated in the system, the technology can potentially replace a large percentage of the world’s mid-level accountants – who perform the same verifications manually. What’s more, the digital representation of contracts (i.e., the ‘smart contracts’) can do away with the need to draw up the same contracts repeatedly – and hence, the need for many lawyers. However, these perceived employment losses would be more than offset by the increased opportunities offered by the new technology – with a strong, well-trained workforce required to manage blockchains and use them in an optimized manner. In a nutshell, digital distributed ledgers would increase the demand for a qualified workforce, while reducing the need for repetitive manual work. Firms are chasing greater efficiency with blockchains – and the technology is being adopted in a wide range of industries.

Note: The main application areas of blockchains (apart from financial services) are covered in the next post.

  10. The downside of blockchains

    Ross William Ulbricht, the founder of the world’s first cryptocurrency-based illegal ‘darknet market’, called ‘Silk Road’ (for buying/selling drugs), was sentenced to life imprisonment in 2015. However, the success (albeit for a limited period) of Silk Road (v.3.0 was pulled down earlier this year) showcased a way in which the blockchain technology can be misused. As previously mentioned, the system allows for anonymous transactions (owner data can remain hidden) – and only the transaction details are entered into the shared ledger. As a result, malpractices and illegal trading with bitcoins can be initiated by shady third-party users. As the blockchain market matures over the next few years, this issue is expected to be resolved. It is a powerful technology, and developers have to ensure that it is not used for underhand practices.

Blockchain is still a fair way off from becoming mainstream, with the market expected to mature by 2025 (it is currently in an ‘early adoption’ stage). The growth in the interim is set to be remarkable, with leading tech giants as well as a host of startups (Slock.it, Enigma, SETL) becoming actively involved in developing/leveraging the technology. To sum up, the open-source blockchain distributed ledger replaces centralized gateways/servers and delivers cutting-edge recordkeeping services for all types of digital transactions. Easily one of the growing technologies to watch out for!

 

 

Blockchain Beyond Financial Services: 13 Applications & Use Cases


 

Uses of blockchains in different industries

 

The financial services sector has been the earliest, and one of the biggest, adopters of the distributed ledger technology (DLT) – more popularly known as blockchain technology. Introduced in 2008 by a little-known person/team/entity named Satoshi Nakamoto, blockchains have grown rapidly in recent years – keeping pace with the burgeoning popularity of cryptocurrencies. According to a March 2017 survey, nearly 8 out of every 10 banking institutions have already started creating their unique blockchain architecture – and it has been predicted that around 15% of the major banking institutions worldwide will become active users of the technology before the end of the year. Given the fact that blockchains can potentially bring down the annual infrastructural expenses of banks worldwide by a whopping $20 billion by 2022, this rapid spurt is not surprising.

It is interesting to note, though, that blockchains are no longer regarded as tools whose utility is limited to the banking and financial services sector. In 2017, 23% of all finance professionals are likely to invest over $5 million in the technology. That is considerably lower than the interest levels in the manufacturing industry (where 42% of executives plan to invest similar amounts) and in the media, tech and telecom industry (where 27% have the same investment plans). Blockchain technology is slowly but surely moving beyond the finance industry, and here we take a look at some other interesting applications of the breakthrough DLT:

  1. Logistics management and supply chain auditing

    Blockchains can play a very important role in enhancing the security and efficiency of the storage/transfer of products (perishable goods, in particular). Right from packing and storage, to quality testing and distribution – every activity can be recorded on the distributed ledger, and all concerned parties on the network will be notified about the same. The data that has to be entered by users will vary across the different stages of the supply chain. With blockchains, auditing and establishing the authenticity of each step in the logistics system becomes easier and way smarter than ever before.

  2. Data handling

    The volume of data that is regularly collected (and has to be correctly scanned) by us is expanding rapidly. A blockchain-based ledger setup can ease the overall process of data management by companies as well as governmental bodies – by recording details of different entries, making data handling tasks simpler and more transparent, and ensuring uniformly high levels of data security. Since the data records in the blockchain are typically time-stamped, the total cost of data management can be cut down (and the chances of errors become minimal). The analytics information required for different types of applications can also be supported on a blockchain. The technology helps with compliance-related issues as well.

  3. Blockchain for IoT

    The first signs of integrating the blockchain technology in the domain of Internet of Things (IoT) came along in January 2015, when IBM released a proof-of-concept for ADEPT (in collaboration with Samsung). The concept involved using the underlying design architecture of bitcoin to create a decentralized IoT setup. Last August, Chronicled applied the Ethereum blockchain to create an IoT Open Registry. Real-time analytics can be connected with IoT through the edge nodes, and there are several ways in which the usability and interoperability of consumer products can be improved – for instance, the DLT can store the identities of the different goods, which would ideally carry advanced NFC chipsets. Predictive maintenance is yet another field in which IoT and blockchain technology can be effectively combined: the latter can be used to detect probable glitches/damages in IoT devices (and generate warning notifications). The probability of hack attacks, hence, would be significantly reduced.
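
One way such a goods-identity registry could work is as an append-only table of identity hashes – a hypothetical sketch for illustration, not Chronicled’s actual design:

```python
import hashlib

class GoodsRegistry:
    """Append-only registry: each product's identity (e.g. its NFC chip ID
    plus metadata) is stored as a hash that later parties can re-check."""
    def __init__(self):
        self.entries = {}

    def register(self, chip_id: str, metadata: str) -> str:
        digest = hashlib.sha256((chip_id + "|" + metadata).encode()).hexdigest()
        self.entries[chip_id] = digest
        return digest

    def verify(self, chip_id: str, metadata: str) -> bool:
        """A product is authentic only if its data matches the registered hash."""
        expected = self.entries.get(chip_id)
        claimed = hashlib.sha256((chip_id + "|" + metadata).encode()).hexdigest()
        return expected == claimed

registry = GoodsRegistry()
registry.register("chip-42", "smart lock, batch 7")
assert registry.verify("chip-42", "smart lock, batch 7")      # authentic
assert not registry.verify("chip-42", "smart lock, batch 8")  # altered metadata
```

On an actual blockchain the registry entries would be replicated across nodes, so no single party could silently rewrite a product's identity.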

  4. Online voting

    In early 2016, it was announced that a decentralized distributed ledger system would be used in the official e-Residency platform (for companies listed on the Tallinn Stock Exchange) in Estonia (the announcement was made by the republic and Nasdaq). Blockchains have the potential to play a key role in auditing the ballot boxes in e-voting, thereby restoring much of the credibility of voting systems. A single coin, representing ‘one vote’, would be assigned to each end-user (who would also have his/her credentials in a ‘wallet’). The coin can be ‘spent’ (i.e., the vote can be cast) only once. Apart from keeping fraudulent practices in check, a blockchain-supported voting infrastructure would also be less exposed to online security threats.
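
The one-coin-one-vote mechanism can be modelled in a few lines (an illustrative sketch with hypothetical names – a real system would use signed, distributed transactions rather than an in-memory set):

```python
class BallotLedger:
    """One voting coin per registered voter; a coin can be spent exactly once."""
    def __init__(self, voters):
        self.unspent = set(voters)  # each voter starts with one unspent coin
        self.tally = {}

    def cast(self, voter: str, candidate: str) -> bool:
        if voter not in self.unspent:
            return False  # coin already spent (or voter unknown): vote rejected
        self.unspent.remove(voter)
        self.tally[candidate] = self.tally.get(candidate, 0) + 1
        return True

ballots = BallotLedger({"alice", "bob"})
assert ballots.cast("alice", "Candidate X")      # first vote accepted
assert not ballots.cast("alice", "Candidate Y")  # double voting rejected
assert ballots.tally == {"Candidate X": 1}
```

This is the same double-spend protection that keeps a bitcoin from being spent twice, repurposed for ballots.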

  5. Electricity trading

    Blockchain technology does away with the need for middlemen – and that is one of the biggest reasons behind its burgeoning popularity in peer-to-peer (P2P) electricity trading applications. In a ‘transactive energy’ setup, each user will be able to trade (buy/sell) power with neighbors, securely, promptly, and without any hassles. The implementation of blockchains makes the overall ‘power trading’ system high-fidelity. Distributed ledger technology is also witnessing healthy adoption for the exchange of RECs (renewable energy credits). There are several other use cases of blockchains in the power and energy sector, like validating energy trades, managing smart grids, analyzing and benchmarking big data, and trading ‘green certificates’. Metering local energy generation points also becomes easy.

  6. Real estate management

    With the help of a high-performance software-as-a-service platform, the tasks of recording, tracking and transferring information on real estate deeds can be facilitated (that is exactly what Ubitquity does). Real-time notifications about all transactions (as and when they happen) are sent to all the relevant members of the network – ensuring complete transparency and communicability. Professional mortgage companies can also benefit by developing and implementing blockchains. What’s more, the technology can be integrated with ‘smart home’ systems, to monitor the critical parameters of assets.

  7. Ushering in Sharing Economy 2.0

    The distributed ledger technology is, by definition, decentralized, which makes it a great tool for taking the ‘sharing economy’ to the next level. Practically every form of digital information can be supported and stored on blockchains, and activities (irrespective of their scale) can be easily monetized. That, in turn, opens up the possibilities of various types of direct, peer-to-peer transactions – like electricity trading (discussed above), hiring cabs (without the presence of middlemen like Lyft or Uber), data-sharing and providing advisory services. In an ecosystem managed by blockchains, people will be able to seamlessly connect with each other and perform direct transactions.

Note: Blockchains are also being used to deliver endorsements, and for the ranking and verification of online reputation of users. ‘The World Table’ and ‘ThanksCoin’ have emerged as major players in this niche.

  8. User identification, authentication and security

    The need for maintaining robust digital security standards is not limited to transactions in the financial sector alone. The open-source distributed ledger technology is expected to make the task of identifying and authenticating users a lot more efficient. All that individual users have to do is create unique identities on the blockchain network, to manage/control the nature and accessibility of their personal data. The technology will assign a ‘digital ID’ to each user – which would help in tracking all transactions of asset(s) in future. Credit report malpractices, cases of online identity theft and cyber frauds will all go down with the progressive implementation of the blockchain technology. User identification with the help of biometric tools (like fingerprint identification) is also being facilitated by underlying blockchains. A case in point is UniquID.

  9. Managing intellectual property rights

    A recent report revealed that illegal downloads bring down the total proceeds from online music sales by ~85%. Other forms of digital art face similar risks. To protect intellectual property rights (IPR) and ensure that optimal returns are obtained from art collections, blockchains might well be the best tools. On the network, owners of music and other artworks can upload their work, provide a watermark for establishing ownership, and manage/track the transfer of these digital art items. In essence, the availability of a virtual decentralized ledger allows artists to control how, where and when their creations are used/transferred/deployed, at all times. The movement of digital art in a blockchain framework is similar to bitcoin transactions – minimizing the chances of IPR violations.

Note: Grammy-winning musician Imogen Heap has created a music-streaming platform powered by blockchain technology. It is called Mycelia.

  10. Establishment of decentralized social networks

    This is one of the newest domains in which blockchain technology has started to make its presence felt. Unlike Facebook, Twitter, or any other currently popular social media site, a decentralized social network (DSN) would not involve any centralized, controlling company/entity – and user privacy would be much higher. The same software can be used for connecting multiple servers in a DSN. Synereo, DATT and Diaspora are some examples of the decentralized social networks that are coming up. Many experts feel that social networking will become more and more decentralized over the next decade or so – and if that is indeed the case, the role of blockchains here will become increasingly prominent.

  11. Blockchain in healthcare

    Infosys has already identified the medical sector as one in which distributed ledgers will have a strong role to play in the foreseeable future. E-medical records can be made more accurate and secure than ever before – and no intermediaries will be involved in their maintenance. Blockchain’s superior handling of these records is likely to help in the creation of smarter health information exchange (HIE) models. In addition, the new technology has the power to enhance the interoperability of different health records, as well as ease the processes of testing proofs-of-concept and conducting medical experiments. In a blockchain-supported healthcare system, the patient (with his/her information) is always at the core.

  12. Gold/silver bullion trading

    A user-friendly investor platform has already been introduced by The Real Asset Company – for the purchase/sale of gold and silver bullion safely and effectively. In the commodities market, blockchains add a definitive edge, by creating secure online accounts for the purpose of buying/holding precious metals. New cryptocurrencies backed by gold/silver can also serve two key purposes: i) creating an additional layer of transparency and security on top of the vaulting setup and investments, and ii) facilitating the return of these precious metals to the global monetary system (Goldbloc is a cryptocurrency that does this).

Note: Blockchains have also proved to be useful for tackling illegal activities in the diamond trading business. On a decentralized, immutable ledger, all diamond identification records and transaction details are registered. The ‘digital passport’ of diamonds (as assigned by Everledger) ensures their authenticity and helps to create trackable footprints.

  13. Role in smart agriculture

    In several previous posts, we have discussed different aspects of smart, IoT-supported agriculture. This sector can also feature extensive application of the blockchain technology over the next few years. While IoT can boost productivity and yield quality in a big way, distributed ledgers can bring about improvements at various points of the agricultural ecosystem – from establishing fair-trade practices and in-depth auditing standards, to ensuring data integrity and round-the-clock compliance. The overall value chains in the primary sector should become more efficient, visible and fair, thanks to the disruption caused by blockchains here.

Earlier this year, the International Airport Review mentioned blockchain as one of the ‘six technologies to revolutionize the airport and aviation industry in 2017’. The technology can play important roles in the job markets (including support for recruitment), in creating cutting-edge network infrastructure and distributed applications, in the worldwide gaming industry (including legal gambling), and in making market forecasts more accurate. Bitnation, the first-ever blockchain-powered jurisdiction (in essence, a ‘virtual nation’ in itself), was created back in 2014 – and it has citizens and stakeholders all over the globe. Blockchains are, of course, still very important in the financial services market – but their adoption in other fields is growing fast.

 

Precision Agriculture: Top 15 Challenges and Issues


Smart farming challenges

 

In the last five years or so, the total volume of investments in the agricultural sector has grown by a massive ~80%. According to experts, precision agriculture (the technique of optimizing existing inputs and fertilizers, tillage tools, fields and crops, for the purpose of improved control and measurement of farm yields) has the potential to play a key role in meeting the incremental food demands of the growing worldwide population. A recent report estimated the value of the global precision farming market at the end of this decade at around $4.6 billion – with the CAGR between 2015 and 2020 being just a touch under 12%. In the United States alone, the market for smart agriculture software is likely to jump by more than 14% between now and 2022. However, the actual growth and proliferation of precision farming has not been as robust as was expected earlier. The sector faces several key challenges, and we turn our attention to them in this post:

  1. Interoperability of different standards

    With more and more OEMs coming up with new and innovative agricultural IoT tools and platforms, interoperability is rapidly becoming a point of concern. The various available tools and technologies often do not follow the same technology standards/platforms – as a result of which there is a lack of uniformity in the final analysis done by end users. In many instances, the creation of additional gateway(s) becomes essential for the translation and transfer of data across standards. As things stand now, precision agriculture (while evolving rapidly) is still, to a large extent, fragmented. The challenge lies in transforming smart standalone devices and gateways into holistic, farmer-friendly platforms.

  2. The learning curve

    Precision farming involves the implementation of cutting-edge technology for bolstering crop growth. For the average farmer, setting up the necessary IoT architecture and sensor network for his/her field(s) can be a big ask. It has to be kept in mind that the room for error in a tech-upgraded ‘smart farm’ is minimal – and faulty management (a wrongly pressed valve here, forgetting to switch off the irrigation tank there, etc.) can be disastrous. Getting farmers thoroughly acquainted with the concept of smart farming, and the tools/devices involved in it, is of the utmost importance – before they can actually proceed with the implementation. Lack of knowledge can be dangerous.

  3. Connectivity in rural areas

    In many remote rural locations across the world (particularly in developing countries, although several locations in the US suffer from this as well), strong, reliable internet connectivity is not available. That, in turn, thwarts attempts to apply smart agriculture techniques in such places. Unless network performance and bandwidth speeds are significantly improved, implementation of digital farming will remain problematic. Since many agro-sensors/gateways depend on cloud services for data transmission/storage, cloud-based computing also needs to become stronger. What’s more, in farmlands that have tall, dense trees and/or hilly terrain, reception of GPS signals becomes a big issue.

  4. Making sense of big data in agriculture

    The modern, connected agricultural farm has millions of data points. It is, however, next to impossible to monitor and manage every single data point and reading on a daily/weekly basis, over entire growing seasons (nor is it necessary). The problem is particularly acute on large, multi-crop lands and when there are multiple growing seasons. The onus is on the farmers to find out which data points and layers they need to track on a regular basis, and which data ‘noise’ they can afford to ignore. Digital agriculture is increasingly becoming big-data-driven – but the technology is helpful only when users can ‘make sense’ of the available information.

  5. Non-awareness of the varying farm production functions

    In-depth economic analysis needs to complement internet tools, to ensure higher yields on farms. Users need to be able to define the correct production function (output as a function of key inputs, like nutrients, fertilizers, irrigation, etc.). Typically, the production function is not the same for all crops, differs across the various zones of a farm, and also changes over the crop/plant-growth cycle. Unless the farmer is aware of this varying production function, there will always remain the chance of applying inputs in incorrect amounts (spraying too much nitrogen fertilizer, for example) – resulting in crop damage. Precision agriculture is all about optimizing output levels by making the best use of the available, limited inputs – and for that, the importance of following the production function is immense.
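
The point about production functions can be made concrete with a toy example: the profit-maximizing input rate depends on input and crop prices, not just on the yield curve (all functions and numbers below are hypothetical, purely for illustration):

```python
def yield_response(n: float) -> float:
    """Hypothetical quadratic production function:
    yield (t/ha) as a function of nitrogen applied (kg/ha)."""
    return 2.0 + 0.04 * n - 0.0001 * n ** 2

def best_rate(crop_price: float, input_price: float, rates) -> float:
    """Choose the input rate that maximizes profit, not raw yield."""
    return max(rates, key=lambda n: crop_price * yield_response(n) - input_price * n)

rates = range(0, 401, 10)
# When fertilizer is cheap, the profit-maximizing rate sits near the yield
# maximum; when it is expensive, the optimal rate drops well below it.
assert best_rate(200.0, 0.1, rates) > best_rate(200.0, 2.0, rates)
```

A farmer (or platform) that only chases maximum yield would over-apply inputs whenever prices shift – exactly the mistake described above.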

  6. Size of individual management zones

    Traditionally, farmers have considered their entire fields as single farming units. That approach is, however, far from effective for the application and management of IoT in agriculture. Users have to divide their lands into several smaller ‘management zones’ – and there is quite a lot of confusion regarding the ‘correct’ size of these zones. The zones have to be demarcated with respect to soil sampling requirements (different zones have varying soil qualities) and fertilizer requirements. The number of zones on a field, and their respective sizes, should depend on the overall size of the growing area. There is not much reference work for farmers to go by when dividing their lands into these zones. As an alternative, many farmers continue to follow uniform fertilizer application and/or irrigation methods for the entire farm – leading to sub-optimal results.

  7. Barriers to entry for new firms

    Although precision farming has been a subject of considerable interest for several years now, the concept is still relatively ‘new’. As such, the big hardware/software manufacturers that entered this market at an early stage still enjoy a definite ‘first-mover advantage’. The low competitiveness of the market can deter new firms from entering this domain – with the existing big firms retaining a stranglehold. Farmers can also face problems while trying to migrate data streams from an older platform to a newer one, and there are risks of data loss. The resources and platforms provided by a big player in the agro-IoT sector might not be compatible with those provided by a smaller OEM – and that might prevent the latter from winning enough clients.

  8. Lack of scalability and configuration problems

    Agricultural farms can be of different sizes. A single owner can have a large crop-growing land, along with several smaller lands. In India, nearly 33% of the total area under agriculture is accounted for by only 5% of the total number of farms – clearly highlighting the uneven nature of farm sizes there. A farmer needs to be provided IoT tools (access points, gateways, etc.) that are completely scalable. In other words, the same technology should be applicable, and the same benefits available, on a large commercial farm as well as on a small piece of personal garden/crop land. The need for manually configuring the setup and the devices is yet another probable point of concern. For agriculture to become truly autonomous, the technology should be self-configurable. The recent surges in artificial intelligence and M2M learning open up the possibility for that.

  9. Energy depletion risks

    A lot has already been written about the environmental advantages of switching over to smart agriculture (precision farming is ‘greener’). However, the need for powerful data centers and gateways/hubs for the operation of the smart sensors and other gadgets can lead to heavy energy consumption – and more resources are required to replenish that energy. What’s more, the creation of new agricultural IoT tools also has an effect on the energy sector. Not surprisingly, companies have started to focus on farming technology platforms that do not cause too much energy depletion…but there is still a long way to go in this regard.

  10. Challenge for indoor farming

    Most precision agriculture methods and resources are optimized for conventional outdoor farming. With the value of the global vertical farming industry projected to go beyond $4 billion by 2021, more attention has to be given to technology support for indoor farming. The absence of daily climatic fluctuations and regular seasons has to be taken into account while coming up with smart indoor farming methods. The nutritional value of the outputs must not get adversely affected in any way either. Farmers need to be able to rely on the technology to create the optimal growing environment (light, temperature, water availability) for indoor plants.

  11. Technical failures and resultant damages

    The growing dependence of agriculture (or anything else, for that matter!) on technology comes with a potentially serious downside. If there is a mechanical breakdown in the hardware, or a farming IoT unit/sensor malfunctions, serious crop damage can result. For example, if the smart irrigation sensors are down, plants are likely to be underwatered or overwatered. Food safety can be compromised if the technological resources in the storage area(s) are not functioning. Even a few minutes of downtime due to a power failure can have serious consequences – particularly when backup power is not available.

  12. Mounting e-wastes

    Farms powered by smart technology have (to varying extents) done away with the problems of runoff, contamination, and other channels of ecological damage. Carbon dioxide emissions have been brought down significantly (~2.0 gigatonnes in a five-year span) as well. A new risk has cropped up, though – in the form of electronic waste (e-waste). In 2013, the total volume of such waste was in excess of 52 million metric tons – and the piles of discarded IoT tools, computers and outdated electronic devices are compounding this problem further. In a nutshell, the regular hardware upgrades are making older units obsolete – and in many areas, dumping them is swelling landfills. For things to be sustainable, proper arrangements for the disposal of e-waste have to be made. Soon.

  13. Loss of manual employment

    On average, 4 out of every 10 members of the global workforce are employed in the primary sector. The figures are particularly high in Oceania, Africa and Asia. As IoT in agriculture becomes more and more mainstream and things become automated, a large percentage of this agricultural labour force will lose their jobs. The other sectors need to have the capacity to absorb this workforce (now rendered jobless) – and in many developing/underdeveloped countries, the economy is not strong enough for that to happen. There is no scope for doubting the benefits that precision agriculture brings to the table – but the large-scale displacement of manual workers can lead to dissatisfaction among people.

  14. The security factor

    The presence of malware and data theft is a risk in practically all types of ‘connected systems’, and smart agriculture is no exception. As the count of middleware technologies, endpoints and IoT devices in active agricultural use increases, the number of entry points for malicious third-party programs goes up as well. Since third-party attacks on a complex IoT system are often decentralized, detecting and removing them is a big challenge. The situation becomes more complicated due to the propensity of many farm owners to opt for slightly cheaper devices and resources, which do not come with the essential safety assurances. The multiple software and API layers can cause problems as well. There is an urgent need for tighter security and provisioning policies for agricultural IoT – to make it more acceptable to users.

  15. Benefits not immediately apparent

    To find the motivation to invest in a ‘new technology’ like smart farming, users (understandably) want an idea of its likely ROI. Unfortunately, there is almost no way to reliably estimate the long-run benefits of precision farming – and those benefits do not become apparent from the very outset. For this very reason, many landowners still view the use of advanced technology in agriculture as ‘risky’ and ‘uncertain’, and stay away from adopting it. With greater familiarity with agritech and comprehensive training, such fears should go away.

Smart gadgets that merely provide information about the extent of crop damage are of little use – there is a need for more ‘predictive maintenance’ tools that can anticipate damage and help farmers avoid it. Customization of sensors and resources to meet the varying nutrient/water/pest-control requirements of different plants is a challenge, as is collating and comparing data from multiple farms. Farmers need a complete knowledge of the correct ‘nutrient algorithms’, so that the platforms/gateways can be configured optimally. There is also room for cutting down the rather frequent ‘yield map errors’, which lead to faulty output estimates.

 

The concept of precision agriculture is based on four pillars – Right place, Right source, Right quantity and Right time. It has already made a difference to agriculture and farm yield performance worldwide…and once the aforementioned challenges are overcome, its benefits will become more evident, more sustainable.

Farming 2.0: How Does IoT Help Agriculture?


 

Role of IoT in smart agriculture

 

The degree of mechanization in agriculture is going up rapidly. At the turn of the century, none of the 525 million farms across the world had sensor technology (or, for that matter, IoT in any other form). Cut to 2025, and we will witness more than 620 million sensors in use across that same base of 525 million farms. The growth and proliferation of the agricultural internet of things (Agro-IoT) is expected to pick up even more pace from then on – with ~2 billion smart agro-sensors expected to be in active use by 2050. Between 2017 and 2022, the agricultural IoT market is set to expand at a mighty impressive CAGR of around 16%-17%. In what follows, we will put the spotlight on the role of IoT in agriculture and analyze how smart technology is helping the sector:

  1. Boost to precision farming

    Traditionally, the agricultural sector has been fraught with risks. There are plenty of factors, ranging from rainfall forecasts and improper irrigation, to faulty planting/harvesting methods and poor soil quality, that can have adverse effects on overall productivity. Agricultural IoT offers farmers a great way to stay at arm’s length from such uncertainties. With the help of advanced agro-sensors, users can get real-time, highly accurate data from their fields – on the basis of which key decisions (‘when to irrigate?’, ‘when to harvest?’, etc.) can be taken. Round-the-clock access to all relevant information minimizes the chance of crop losses, and also helps growers make better, more well-rounded farming plans. With the growth of precision agriculture, the concepts of site-specific crop management (SSCM) and satellite farming (SF) are coming into the picture.

  2. The role of big data in agriculture

    In 2014, the average farm generated fewer than 200,000 data points. By 2050, that figure will jump to 4 billion – a testament to how quickly ‘connected farms’ will grow during this period. In the realm of data-driven agriculture, it is becoming increasingly easy to track and monitor important parameters, like soil quality, plant nature and health, pest infestations, fertilizer usage, the state of agricultural machinery, storage facilities, and a host of other factors. Better handling of chemical fertilizers, along with smart irrigation management, offers environmental benefits as well. In essence, IoT in agriculture can very well be termed a ‘necessary innovation’ – the technology has the potential to boost both the quality and quantity of crop yields.

Note: According to an OnFarm report, integration of IoT can bolster yields by nearly 2%, bring down water-wastage by ~7%, and also cause significant energy savings (per acre).

  3. Arrival of agricultural drones

    Unmanned aerial vehicles (UAVs) are playing an increasingly important role in smart farms. People can use these farming drones to track soil and weather conditions (like sensors, they can work in collaboration with satellites and other third-party tools), as well as create detailed 3D maps of the fields. The 3D geomapping technique is particularly useful for quickly detecting existing inefficiencies in the field, and taking corrective measures immediately. Monitoring the crop life cycle and performing a supervisory role (very important in relatively large farms, where manual supervision is difficult) feature among key functions of agricultural drones. The value of the worldwide agro-drone industry is already well over $32 billion, and the figure is expected to climb sharply over the next half a decade or so.

  4. More efficient irrigation

    Lack of proper water management has been a long-standing bane of the primary sector. As highlighted in a previous post, close to 60% of water released for agriculture gets wasted – due to overwatering, runoffs, contamination, and other related issues. What’s more, instances of crops getting damaged as a result of under/over-watering are also fairly common. Once again, such problems can be effectively tackled by farmers by upgrading their fields to the IoT platform. Right from tank-filling & management and valve operations, to chalking up optimized irrigation sessions/schedules – everything can be performed via advanced Sensor Observation Service (SOS) tools. The irrigation requirements of crops are estimated carefully, along with the moisture content of the soil (also, the acid content). That, in turn, helps in efficient utilization of the limited water resources (a key factor in drought-prone locations). As per reasonable estimates, integration of smart irrigation tools can save up to 50 billion gallons of water annually.
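
The decision logic inside such smart irrigation tools can be sketched roughly as follows. This is a minimal illustration, not any vendor's actual algorithm – the thresholds, sensor values and the proportional schedule are all hypothetical, and real systems calibrate them per crop and soil type:

```python
# Hypothetical thresholds; real systems calibrate these per crop and soil type.
MOISTURE_MIN = 0.25   # volumetric water content below which irrigation starts
MOISTURE_MAX = 0.40   # above which irrigation is skipped (avoids overwatering)

def irrigation_minutes(soil_moisture, rain_forecast_mm):
    """Return how long (in minutes) to run the valves, or 0 to skip."""
    if soil_moisture >= MOISTURE_MAX or rain_forecast_mm > 5:
        return 0  # soil is wet enough, or rain is expected anyway
    deficit = MOISTURE_MIN - soil_moisture
    if deficit <= 0:
        return 0  # within the acceptable band
    return round(deficit * 200)  # crude proportional schedule

print(irrigation_minutes(0.18, 0))   # dry soil, no rain -> irrigate
print(irrigation_minutes(0.30, 0))   # within band -> skip
print(irrigation_minutes(0.18, 12))  # rain forecast -> skip, save water
```

The water savings mentioned above come largely from the last branch: skipping scheduled sessions when the soil or the forecast says they are unnecessary.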

  5. Support for indoor farming

    The growing adoption of IoT tools and software among farmers across the globe has opened up excellent opportunities for intensive indoor farming. The overall growing area can be divided into small environments, each under specific growing conditions, with an open-source platform used for the collection and instantaneous sharing of their data. The data (which includes temperature, humidity, dissolved oxygen and carbon dioxide in the air, and several other critical measures) from one such environment is used to create a ‘climate recipe’ – which can then be followed for growing crops in other, similar indoor environments. Farmers have the opportunity to artificially set up conditions conducive to the growth of any particular set of crops (an artificial drought, for example). Indoor farming with computers and internet services offers a high level of precision, and there hardly remains any scope for manual errors or natural elements playing spoilsport.
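
At its simplest, a ‘climate recipe’ is a set of target condition bands captured in one growing environment and replayed in another. A minimal sketch of the idea – the field names, bands and readings here are illustrative inventions, not the actual recipe format used by any platform:

```python
# Illustrative 'climate recipe': target condition bands recorded in one
# growing chamber, to be replayed in another. Field names are hypothetical.
basil_recipe = {
    "air_temp_c":   (22.0, 26.0),   # (min, max) acceptable band
    "humidity_pct": (55.0, 70.0),
    "co2_ppm":      (400.0, 800.0),
}

def out_of_band(recipe, readings):
    """Return the parameters whose current readings fall outside the recipe."""
    return [name for name, (lo, hi) in recipe.items()
            if not lo <= readings[name] <= hi]

readings = {"air_temp_c": 27.5, "humidity_pct": 60.0, "co2_ppm": 650.0}
print(out_of_band(basil_recipe, readings))  # controller would cool the chamber
```

A real controller would run this check in a loop and actuate heaters, humidifiers or CO2 valves for each out-of-band parameter.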

Note: The indoor farming method pioneered by the OpenAg Initiative uses growing environments named ‘personal food computers’.

  6. Remote management of crops, fields and equipment

    It is next to impossible for farmers to manually check the health and condition of all the crops in their farm(s). Problems associated with excessive soil dryness, malfunctioning agricultural equipment and other on-field inefficiencies can crop up too – and if these are not detected and rectified quickly, substantial loss in productivity is likely to result. IoT tools and smart sensors typically work as ‘middleware technology’ support, for managing all types of farm resources and connected devices on the same platform. Real-time data from the fields is relayed to a central gateway/microcontroller – and it becomes accessible to farmers through a dedicated mobile application on their smartphones. Technology enables users to keep track of what is happening on their farms on a 24×7 basis, irrespective of their precise locations at any time. Monitoring crop health or the performance of farming equipment remotely is no longer a challenge.
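
The ‘middleware’ pattern described above – field sensors relaying readings to one gateway, which a farmer's app then queries – can be sketched with a toy in-memory stand-in. A real deployment would use a protocol such as MQTT or HTTP; the class, field names and readings below are illustrative only:

```python
from collections import defaultdict

class FarmGateway:
    """Toy stand-in for a central IoT gateway: collects sensor readings
    and serves the latest value per (field, metric) to a farmer's app."""
    def __init__(self):
        self._latest = defaultdict(dict)

    def ingest(self, field, metric, value):
        # Each new reading overwrites the previous one for that metric.
        self._latest[field][metric] = value

    def snapshot(self, field):
        # What the mobile app would fetch for one plot of land.
        return dict(self._latest[field])

gw = FarmGateway()
gw.ingest("north-plot", "soil_moisture", 0.21)
gw.ingest("north-plot", "soil_temp_c", 18.4)
gw.ingest("north-plot", "soil_moisture", 0.19)  # newer reading wins
print(gw.snapshot("north-plot"))
```

Keeping only the latest value per metric is what makes the 24×7 dashboard cheap to serve; historical analytics would log every reading separately.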

  7. Smart tractors get rolling

    Self-driving tractors have already started to revolutionize modern farms. These tractors (launched by companies like John Deere and Hello Tractor) are connected to the internet via built-in sensors, and can be guided by farmers with the help of GPS navigation technology. Apart from generating crop and soil data, these high-tech tractors can help in automatic weeding and spraying of pesticides. In fact, the sensors in autonomous farm tractors can actually analyze the components in liquid nutrients, and hence make sure that the spraying is done in the right amounts. To deliver optimal benefits, a smart tractor should be fitted with a spectrometer, a high-power infrared camera, a small computer, and a fluorescence-measurement tool for chlorophyll monitoring (in addition to, of course, the GPS receiver). Automated tractors are still comparatively new, and they are likely to become more powerful in the foreseeable future.

Note: The growing popularity of Rowbots (for nitrogen fertilizer application on corn fields) and ‘Bonirob’ (a crop inventory tracking robot) serves as a classic example of the expanding usage of robotics in agriculture.

  8. Boosts to poultry and fish farming

    The positive impacts of IoT integration in farming are not limited to crop-growing. The fish-farming industry has been identified as one of the subdomains where technology can help in a big way. Thanks to real-time water quality, food and stock monitoring systems and the data they generate, farmers can make smarter, better decisions. In addition, it has become easier to detect and treat diseases. Poultry farming is yet another area of activity where smart technology is finding widespread adoption. Treatment of wastewater and hatchery management are two of the several activities that are becoming mechanized in this sector.

  9. Fighting pest infestations

    Specialized pest control sensors are being made by OEMs to cut down on crop damage caused by fungi and other pests. These tools typically scan and inspect agricultural fields and analyze plant growth patterns, before flagging pest-infested problem areas (if any) – enabling farmers to treat them as quickly as possible. Environmental parameters are factored into the information generated and transferred by these sensors. Thanks to advances in agricultural IoT practices, it is also possible to track previous records of on-field pest infestations. Crop losses due to pests, and the consequent heavy financial losses to the farmers concerned, are gradually becoming things of the past.

  10. Smarter livestock management

    The concept of ‘connected cows’ has generated a lot of buzz and speculation over the last few quarters. There is already an application called eCow, which can efficiently track temperature and pH levels on a daily basis, with the help of a rumen bolus. More generally, IoT has started to help farmers manage the animals on their farms, via embedded systems that track a wide range of pertinent information (apart from the GPS location of every animal), like activities, pulse rate and temperature, tissue conditions, and other critical biomedical statistics. Since live location data is available, it also becomes easier to create geofences. The feeding routine can be automated as well, while users can monitor the produce regularly. Web-enabled livestock monitoring systems also facilitate quick detection of animal diseases (and the required treatment), identification and separation of sick animals from herds, and timely information on animals that pass away. Creating a multi-featured wireless bolus with Bluetooth support that would last the entire lifespan of the animal (fitting them with sensor collars is not a viable option) is a challenge, as is ensuring the accuracy of the data generated. In big game fields, monitoring animals of endangered species (e.g., rhinos) has also been made easier than ever before by connected technology.
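
The geofencing mentioned above reduces, in its simplest circular form, to a distance check between an animal's latest GPS fix and the fence centre. A minimal great-circle sketch – the coordinates and the 500 m radius are made-up examples, and production systems often use polygon fences instead:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in metres."""
    r = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(animal_fix, fence_centre, radius_m):
    """True if the animal's (lat, lon) fix lies within the circular fence."""
    return haversine_m(*animal_fix, *fence_centre) <= radius_m

paddock = (51.5000, -0.1200)  # hypothetical fence centre
print(inside_geofence((51.5001, -0.1201), paddock, 500))  # grazing nearby
print(inside_geofence((51.5300, -0.1200), paddock, 500))  # strayed ~3.3 km away
```

A monitoring system would run this check on every incoming fix and raise an alert on the first `False`.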

Note: A lot of time can be saved if a farmer can track the position of his/her farm animals on a computer/handheld device at all times.

  11. Food safety and logistics

    The need for steadily increasing agricultural productivity to support the ever-growing global population has been well-documented. To date, there have been many instances of perfectly healthy crops being harvested – only for them to get damaged and wasted due to improper storage and/or poor transportation/logistics facilities. With IoT monitoring systems, farmers can finally stamp out such risks. These systems record the temperature, moisture and other conditions in the storage facilities, along with shipping timings, duration of travel, the overall logistics infrastructure, and the transports used for crop transfer. All records from these systems are stored in the cloud, enabling users to access them as and when required.
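
Such storage and cold-chain monitoring ultimately boils down to checking logged readings against safe bands and flagging excursions. A toy sketch of that check – the bands, timestamps and readings are illustrative, and real safe limits depend on the crop being stored:

```python
# Illustrative safe bands for a produce store; real limits are crop-specific.
SAFE_TEMP_C = (2.0, 8.0)
SAFE_HUMIDITY_PCT = (50.0, 65.0)

def excursions(log):
    """Return (timestamp, problem) pairs for readings outside the safe bands."""
    alerts = []
    for ts, temp, hum in log:
        if not SAFE_TEMP_C[0] <= temp <= SAFE_TEMP_C[1]:
            alerts.append((ts, "temperature"))
        if not SAFE_HUMIDITY_PCT[0] <= hum <= SAFE_HUMIDITY_PCT[1]:
            alerts.append((ts, "humidity"))
    return alerts

# Hourly (timestamp, temp_c, humidity_pct) readings from a storage sensor.
log = [("08:00", 5.1, 60.0), ("09:00", 9.4, 61.0), ("10:00", 5.0, 72.5)]
print(excursions(log))  # one temperature and one humidity excursion
```

In a cloud-backed deployment, the same log doubles as an audit trail: a buyer can verify that no excursion occurred anywhere along the shipment.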

  12. Predictions, forecasting, and failure avoidance

    Even with full-fledged IoT integration, agriculture is not going to become a completely ‘fail-safe’ sector. However, technology has been instrumental in lowering risks of all types as much as possible – on different fields, and for crops of practically all types. Innovative multi-device tracking/monitoring systems help in drawing up in-depth livestock and crop analytics, and a reliable failure-prediction setup (covering unfavourable soil, weather, crop health, pest or irrigation conditions). In smart precision agriculture, more and more farmers are switching over to IoT-backed models that provide accurate weather/rainfall forecasts.

Note: IoT integration can increase the performance of both horticulture and greenhouse farming, through wireless sensors and smart applications.

The stage is all set for agricultural IoT to revolutionize farming activities, taking average performance levels up by a couple of notches. There are some temporary bottlenecks, emerging from things like the frequent lack of compatibility/interoperability between sensors from different platforms, the sheer volume of big data generated (handling it can be tricky for the average farmer), and the still-existing doubts in the minds of many farm owners. As soon as these minor hitches are ironed out, the favourable effects of IoT on agriculture will become even more evident.

 

The Rise Of Climate-Smart Agriculture: An Analysis


Among the many factors that have potentially damaging effects on agricultural outputs worldwide, and consequently on farmers, the issue of climate change (CC) has got to be the most serious. Defined as ‘identifiable changes in prevailing climate (statistically testable) that persist over extended periods of time, usually decades’, CC causes fluctuations in temperature levels, deterioration of soil quality, probable decreases in the quality of yields, and rises in atmospheric carbon dioxide, and can even bring about wholesale changes in yearly growing seasons. At present, close to 600 million farms across the globe are struggling to cope with the challenges posed by climate change. It has been estimated that, by 2050, CC will lead to an 11% fall in agricultural output levels, and a whopping 20% rise in average prices. With an eye on improving the sustainability of agriculture, the need of the hour is a gradual reduction of this sector's over-reliance on climatic factors. That, in turn, brings us to the topic of ‘climate-smart agriculture’, or CSA:

  1. The extent of the problem

    Agricultural yields have traditionally depended on the prevailing climate parameters (air temperature, sunlight, humidity, rainfall, etc.). This reliance has always added an air of uncertainty to farming, and has often caused much grief to farmers across the world. The severity of the ‘climate change’ problem is particularly high in countries which already have unfavourable weather/soil conditions. For instance, nearly 1 out of every 3 people in Guatemala suffers from food insecurity, brought about by the uncertainties of agriculture. In Mato Grosso, an apparently minor 1°C increase in temperature can bring down annual corn and soy yields by up to 13%. A University of Leeds report has predicted that farms in temperate and tropical areas will start to be affected (in the form of lowered yields) from 2030, due to 2°C increases in temperature levels. Making the necessary adjustments/technology integrations to adapt to CC would require hefty investments by the developing/underdeveloped nations, to the tune of $200-$300/year (as estimated by the UNEP). The problem is big, and coping with it is a major challenge.

  2. The concept of climate-smart agriculture

    Climate change adversely affects both the quality and quantity of agricultural yields. That, in turn, causes farmers to fall into the trap of food insecurity, and consequent malnourishment and poor quality of life. The prime objective of climate-smart agriculture (CSA) is satisfactorily solving this problem, and delivering food security to everyone concerned. To attain this goal, CSA places prime focus on three factors: i) increases in farm outputs (productivity enhancement), ii) reduction in greenhouse gas emissions, to stall global warming (mitigation enhancement), and iii) boosting the resilience of crops/farms, in the face of climatic vagaries (adaptation enhancement). Interestingly, there are tradeoffs involved among these three factors (often referred to as the ‘3 pillars of CSA’). The challenge lies in integrating climatic elements in the overall agricultural plans, and optimizing the different targets by handling the tradeoff between these 3 factors in the best possible manner.

  3. The importance of geomapping in CSA

    Climate-smart agriculture has emerged as a key element of sophisticated agritech standards in general, and of the application of IoT and sensors in particular. The usage of smart sensors for geomapping – showcasing differences in climate and soil conditions (temperature, humidity, terrain quality, soil pH value, etc.) across locations by marking them in different colours on a map – is a classic example of this. These farm sensors can be designed to capture real-time data from weather satellites and/or other third-party elements, and send it back to a centralized gateway for detailed analysis. To ensure accurate geomapping and optimal performance of agro-sensors, the cellular network coverage has to be strong (and reliable) enough. Generally, the presence of many tall trees on a farmland can interrupt signals, and hence cause the sensors to malfunction.
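
At its core, the colour-coded geomapping described above is just bucketing a per-cell measurement into a colour scale. A minimal sketch for soil pH – the bucket boundaries, colours and grid values are illustrative, and real scales are tuned to the crop being grown:

```python
# Illustrative pH bands mapped to geomap colours; real scales are crop-specific.
def ph_colour(ph):
    if ph < 5.5:
        return "red"      # too acidic
    if ph < 6.5:
        return "yellow"   # slightly acidic
    if ph <= 7.5:
        return "green"    # near-neutral, generally ideal
    return "blue"         # alkaline

# Hypothetical 2x2 grid of field cells with sensor-measured soil pH.
grid = {(0, 0): 5.1, (0, 1): 6.8, (1, 0): 7.9, (1, 1): 6.1}
colour_map = {cell: ph_colour(ph) for cell, ph in grid.items()}
print(colour_map)
```

A mapping layer then paints each field cell with its colour, making problem patches (the ‘red’ cells) visible at a glance.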

  4. Cost-benefit studies in CSA

    Full-scale integration of climate-smart practices involves moderate to heavy expenses – in the form of new tools and gadgets, as well as the need to learn how to use the technological resources optimally. Provided that CSA practices have been implemented properly, the benefits can also be huge – mainly because the uncertainties caused by ‘climate change’ will then be out of the picture. In-depth economic analysis is required for this cost-benefit study, and to calculate the estimated potential gains from CSA, the net present value (NPV) and internal rate of return (IRR) figures are often referred to. The discount rate for these calculations is pre-specified (~12%) – representing the social opportunity cost of money. A viable statistical model has to be created to track ‘crop response’ levels after applying CSA practices on the field. On the cost side, both the one-time installation expenses and the flow of maintenance expenses have to be taken into account. The economic feasibility of CSA has already become evident in several locations worldwide. At the Trifinio reserve, for example, the IRR of CSA practices has been in excess of 140%. The results have been even more favourable in Nicaragua, where the cost-benefit ratio has reached 1.85, and the IRR a shade under 180%. Users in Ethiopia have also reported much lower yield variability and ~22% higher outputs as a result of implementing CSA practices.
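
The NPV computation behind such cost-benefit studies is simple to sketch: discount each year's net cash flow at the chosen rate and sum. The ~12% rate is the one cited above; the cash flows are a made-up illustration, not the Trifinio, Nicaragua or Ethiopia data:

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] is the year-0 (installation) outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical CSA investment: $2,000 installation cost, then net yearly
# gains (higher yields minus maintenance) over a five-year horizon.
flows = [-2000, 600, 800, 900, 900, 900]
print(round(npv(0.12, flows), 2))  # positive at the 12% social discount rate
```

A positive NPV at the prescribed discount rate is the basic go/no-go signal; the IRR is simply the rate at which this same NPV falls to zero.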

Note: The IRR rates in Trifinio and Nicaragua were calculated on the basis of vine crops in home gardens. The cost-benefit figure in Nicaragua is with respect to application of practices on basic grains.

  5. The need to reduce GHG emissions

    More than 40% of total greenhouse gas emissions come from agriculture. To ensure the sustainability of farming activities and full food security, reduction of these emission levels is essential. Farmers need to focus squarely on bringing down GHG emissions per unit of produce (kilogram, calorie, etc.), while activities like deforestation have to be done away with as much as possible. Another key requirement in this regard is the management of trees and soil surfaces, so that they can serve as reliable ‘carbon sinks’. The livestock sector – which accounts for nearly 15% of total man-made GHG emissions – has to be examined closely, along with existing rice cultivation techniques. In rice/paddy fields, overwatering (and consequent flooding) is one of the principal causes of rising methane emissions. Hence, lowering the frequency of irrigation and allowing the fields to drain properly are basic strategies for reducing methane production. In general, the heavy use of machines and fertilizers in intensive farming often results in a greater release of greenhouse gases into the environment. One of the biggest sub-domains under CSA is the ‘mitigation’ of such emissions. A ‘greener’ environment will be key to sustainable agriculture.

  6. CSA practices

    We have already highlighted how the implementation of CSA practices has benefitted farms in several places. Let us now take a look at the most popular ‘CSA practices’ (awareness of CSA was close to 75% by 2014). Using mulch for conservation tillage, with a 67% implementation frequency, is by far the most widely adopted practice, with agroforestry with hedgerows and crop rotation taking the second and third spots. Other relatively commonly implemented CSA practices include drip irrigation, contour ditch setups, stone barriers, and switching over to heat/pest-resistant crop varieties (maize, beans, etc.). The average increase in yields due to the application of these practices hovers between the 25% and 40% mark, with conservation tillage and drip irrigation offering the maximum gains. CSA practices are expected to become more refined in future – and the advantages of using them will be even more significant.

  7. Greater adaptability is a key requirement

    The global population is rising rapidly, and agricultural outputs have to keep pace with it. Put another way, we have to produce enough food to feed a population estimated to reach 9.6 billion by 2050. A ~70% spike in food production is required between now and 2050 – and this growth has to take place through ‘sustainable intensification’ (with minimal negative impacts on the environment, and with no adverse effects on future production capabilities). Making agriculture more ‘resilient’ and adapted to ‘climate change’ is paramount – and that involves the implementation of ‘smart farming’ standards, with advanced, internet-enabled tools and gadgets. Right from optimizing irrigation sessions, to monitoring soil quality/temperature/moisture and weather-related information – everything can be tracked with the help of sensors, examined carefully, and future courses of action determined on the basis of such analyses. Over the last couple of years or so, artificial intelligence (AI) and M2M (machine-to-machine) learning have emerged as vital cogs in optimized precision agriculture. For managing sensors, cellular modems/gateways/controllers are used.

Note: For a detailed analysis of smart irrigation tools and practices, read this post.

  8. Challenges to overcome

    CSA promises to offer food security and development by increasing agricultural produce and making the sector more sustainable than ever before. However, there are certain bottlenecks that impede the widespread application of CSA practices. For starters, since the gains from moving over to climate-smart farming do not usually become apparent right from the start, many farm owners remain sceptical about the return-on-investment (ROI) factor. In the developing countries, getting farmers acquainted with the necessary technology (computer intelligence and robotics, for example) also remains a considerable challenge. CSA is, by nature, data-driven – and conflicts of interest regarding data ownership can easily crop up. Also, the low-margin nature of the agricultural sector acts as a barrier to climate-smart agriculture. Many growers view the innovations involved in CSA as ‘risky’ – and hence remain averse to making investments in the new farming technologies. Thankfully, CSA projects around the world are being backed by public funding – and we should be able to move beyond most of these challenges soon enough.

  9. Emphasis on ‘ecosystem services’

    While modernization of agriculture has picked up pace over the past few quarters, the developments have been mostly fragmented – thanks to the sectoral approaches taken by growers. CSA looks to make things more efficient by making agricultural advancements holistic, with prime focus on integrated plans and management. Under climate-smart agriculture, the importance of the ‘free ecosystem services’ (soil, air, water, etc.) is factored in – and due care is taken to avoid depletion/damage of these resources in any way. As a rule of thumb, CSA practices should focus on bringing about higher outputs without affecting the quality/availability of these ‘ecosystem services’. Typically, CSA proponents highlight the need to understand the various interdependencies among resources (soil, water, air, forests, biodiversity management), and follow a ‘landscape approach’ for improving output levels and making farms more climate-resilient and adaptable. It also has to be kept in mind that CSA is not a ‘one-size-fits-all’, or even a ‘one-size-for-every-time’, solution. Since several related objectives have to be met, the interactions of elements with the overall landscape and ecosystem layers have to be taken into account. A CSA practice that is mighty effective for Farm A can be absolutely useless for Farm B, due to the differences in the ecosystems of the two fields.

  10. CSA and organic farming

    Organic farming and climate-smart agriculture differ primarily in their approaches. In the former, the ‘methods’ of agriculture are specified (avoiding harsh chemical fertilizers and pesticides), while in the latter, the focus is more on the ‘goals of farming’ (namely, food security via higher yields, lower emissions, greater adaptability and sustainability). Interestingly, many practices involved in organic farming are simultaneously ‘climate-smart’ as well. An example in this regard would be organic farming's emphasis on boosting organic matter in soil and improving natural nutrient cycling – activities that help preserve carbon in the soil, and make agriculture as a whole more ‘resilient’. Proper nutrition and diet sustainability are two other factors that come under the purview of climate-smart agriculture. Organic farming is closely related to CSA – and if a comparison has to be made between the two, it is CSA that has the more extensive benefits.

  11. CSA in practice

    There are already many instances of successful implementation of CSA practices in different parts of the world. In Kenya, Uganda and Rwanda, dairy production has been intensified with the help of climate-smart packages of practices (PoP) – with the benefits percolating to over 200 thousand farmers. The ASI rice thresher in Africa offers heavy economic advantages (easily outweighing its installation costs) – and prevents wastage of rice harvests. In Brazil, the ABC credit-initiative plan is geared to provide low-interest loans to farmers involved in low-carbon farming and other sustainability-related activities. Catfish aquaculture in Vietnam has received a serious thrust, while food security in Africa has received a shot in the arm from the ‘drought-tolerant maize for Africa’ (DTMA) project. Carbon credits were handed out to poor Kenyan farmers, in a bid to improve their land-management capabilities and standards. It is pretty much evident from these use cases that CSA has multiple points of entry – at different levels, and with varying specific goals.

It would be folly to view climate-smart agriculture as a rigid set of technological gadgets and practices (although it can involve the application of IoT and robotics in a big way). The essence of CSA lies in seamlessly integrating solutions at the value-chain, food-system, ecosystem & landscape, and even the policy/decision-making levels. Lowering the gender gap and empowering women (along with other marginalized groups) is another important benefit of CSA. There is significant involvement of women (~43%) in agricultural activities, although their actual land-ownership figures are much lower. With ‘climate-smart practices’, attempts are being made to resolve this problem, and provide everyone with equal opportunities. Coping effectively with ‘climate change’ is now possible, thanks to the growing popularity of CSA. These practices are ideal for making agriculture more sustainable than ever before.