Monthly Archives: July 2017

Smart Irrigation With IoT: Top 12 Things To Know

Hussain Fakhruddin

Hussain Fakhruddin is the founder/CEO of Teknowledge mobile apps company. He heads a large team of app developers, and has overseen the creation of nearly 600 applications. Apart from app development, his interests include reading, traveling and online blogging.

Benefits and key features of smart irrigation

The world has close to 7.53 billion people at present. A recent study found that, on average, 33% of the global population suffers from water scarcity in some form or other. By 2030, this figure is likely to rise to 50% – clearly underlining the alarming rate at which the problem of water deficiency is growing. Interestingly, ~70% of the total volume of water withdrawals in the world is used for irrigation, and that’s precisely where most of the water wastage happens. Around 60% of the water meant for irrigation is lost to evapotranspiration, land runoff, or simply inefficient, primitive usage. This, in turn, brings to light the importance of smart irrigation – powered by the internet of things (IoT) – which can go a long way in managing the rising levels of water stress worldwide. In what follows, we will put the spotlight on some interesting facts about smart irrigation:

  1. The need for automated irrigation

    Smart irrigation is a key component of precision agriculture. It helps farmers avoid water wastage and improve the quality of crop growth in their fields by: a) irrigating at the correct times, b) minimizing runoffs and other wastage, and c) determining soil moisture levels accurately (thereby determining the irrigation requirements at any location). Replacing manual irrigation with automatic valves and systems also does away with the human error element (e.g. forgetting to turn off a valve after watering the field), and is instrumental in saving energy, time and resources. The installation and configuration of smart irrigation systems is, in general, fairly straightforward too – which helps the average user.

  2. The IoT-based irrigation system architecture

    A smart microcontroller (which serves as the ‘information gateway’) lies at the heart of the automated irrigation infrastructure. Soil moisture sensors and temperature sensors, placed in the fields, send data on a real-time basis to the microcontroller. Generally, a ‘moisture/temperature range’ is specified – and whenever the actual values fall outside this range, the microcontroller automatically switches on the water pump (connected to its output pins). The microcontroller also drives servo motors, to make sure that the pipes water the fields uniformly (no area gets waterlogged; no area is left too dry). The entire system can be managed by the end-user (farmer) through a dedicated mobile application. Smart irrigation makes it possible for growers to monitor and irrigate their fields remotely, without any hassles.
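The microcontroller’s decision logic described above can be sketched in a few lines. This is a simplified Python simulation – the range values and function name are illustrative stand-ins; on real hardware the readings would come from ADC/GPIO pins:

```python
# Simplified sketch of the microcontroller's decision logic (illustrative
# thresholds). On real hardware, the readings would arrive via ADC/GPIO pins.

MOISTURE_RANGE = (30.0, 60.0)      # acceptable volumetric water content, in %
TEMPERATURE_RANGE = (10.0, 35.0)   # acceptable soil temperature, in deg C

def pump_should_run(moisture: float, temperature: float) -> bool:
    """Switch the pump on whenever a reading falls outside its specified range."""
    moisture_ok = MOISTURE_RANGE[0] <= moisture <= MOISTURE_RANGE[1]
    temperature_ok = TEMPERATURE_RANGE[0] <= temperature <= TEMPERATURE_RANGE[1]
    return not (moisture_ok and temperature_ok)
```

In a deployment, this check would run in the gateway’s main loop, with the result driving the pump’s output pin.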

  3. The use of internet

    The flow of information to and from the centralized gateway (here, the microcontroller) has to be supported by stable internet services. Low-power wireless networks (e.g., LoRaWAN or Sigfox) can easily be used to power the sensors. These sensors send field information to the local computer of the user, or to a cloud network (as required). There, the system can combine the information with other inputs from third-party services (say, the local weather channel) to arrive at ‘intelligent irrigation decisions’. For example, if rain is in the forecast, water will not be released – even if the real-time data suggests that the field needs irrigation. Recalculations are done at regular intervals.
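The ‘forecast overrides sensors’ rule described above reduces to a small decision function. A hedged sketch – the names and threshold are hypothetical:

```python
def should_irrigate(moisture_pct: float, threshold_pct: float, rain_forecast: bool) -> bool:
    """Release water only when the soil reads dry AND no rain is forecast."""
    if rain_forecast:
        return False  # the weather-service input overrides real-time sensor data
    return moisture_pct < threshold_pct
```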

Note: Smart irrigation systems can save up to 45% water during the dry season, and around 80% water in the rainy season, compared to manually operated watering systems.

  4. The cost advantages

    In an automated irrigation infrastructure, there is no room for resource (read: water) wastage. As a result, there are cost benefits to be gained as well – by replacing the traditional watering system with a fully self-operating one. Chances of crops dying due to excessive (or insufficient) watering are minimal, which means that farmers will not have to worry about frequent plant replacement. Also, since smart agriculture in general, and smart irrigation in particular, is all about faster, healthier crop growth – the average crop cycle is shortened, and annual yields are likely to be higher. IoT-powered irrigation tools can be used in lawns, gardens and landscapes too.

  5. Types of sensors used

    Several types of sensors are used to relay data to the irrigation microcontroller unit – each dedicated to capturing and transmitting specific data. The first are the soil moisture sensors (or SMS), which examine the dielectric constant of soil surfaces to estimate the volumetric water content in the surface (the moisture level is directly proportional to the dielectric constant reading). SMS controllers can either be ‘on-demand’ (capable of initiating and terminating irrigation sessions) or ‘bypass’ (capable of allowing or bypassing scheduled irrigation sessions, within pre-specified threshold levels). Next up are the temperature sensors, which typically use advanced Resistance Temperature Detector (RTD) components to track soil temperature levels accurately. The ‘relay’ systems are responsible for turning the pump(s) on or off, as per the precise soil requirements at any time.
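The SMS behaviour just described can be illustrated as follows. The linear calibration mirrors the ‘directly proportional’ relationship mentioned above; real sensors use empirically fitted curves, so the coefficients here are purely illustrative:

```python
def vwc_from_dielectric(ka: float, slope: float = 1.17, intercept: float = -5.0) -> float:
    """Estimate volumetric water content (%) from the measured dielectric
    constant, using a simple linear calibration (illustrative coefficients)."""
    return slope * ka + intercept

def bypass_allows_irrigation(ka: float, vwc_threshold_pct: float = 30.0) -> bool:
    """A 'bypass' SMS controller lets a scheduled irrigation event proceed only
    while the estimated moisture is still below the pre-specified threshold."""
    return vwc_from_dielectric(ka) < vwc_threshold_pct
```

An ‘on-demand’ controller would use the same estimate, but to start and stop sessions itself rather than to gate a schedule.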

Note: Soil moisture sensors offer much more efficient on-field irrigation than traditional, timer-based sprinkler systems. There are no risks of overspraying or overwatering with the former.

  6. Incorporating the climatic parameters

    While smart soil moisture sensors have many merits, they do not factor in weather-related conditions in any way – and that remains a limitation. Significant amounts of moisture are lost due to evapotranspiration (ET; the total water lost from the plant leaves via transpiration, AND from the soil via evaporation). Hence, crop-growers should ideally think beyond SMS controllers, and start using the ‘smarter’ evapotranspiration controllers or weather-based irrigation controllers (WBICs). These work with high-quality weather sensors – which receive real-time weather updates, and use them to customize the irrigation events. WBICs can also work with historical weather information and/or data received from satellites. Other unique characteristics of a particular crop field – from the types of plants and the nature of the soil, to the ground slope and the amount of sunlight available – are also taken into account, to determine the exact amount of watering a place needs at any point in time.
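The water-balance idea behind WBICs boils down to a simple daily calculation: estimate crop evapotranspiration from a reference ET value and a crop coefficient, then subtract rainfall. A sketch – real controllers also adjust for soil type, slope and sprinkler efficiency:

```python
def daily_irrigation_mm(et0_mm: float, crop_coefficient: float, rainfall_mm: float) -> float:
    """Water to replace today, in mm: crop evapotranspiration (Kc * ET0)
    minus effective rainfall, floored at zero."""
    etc_mm = crop_coefficient * et0_mm
    return max(etc_mm - rainfall_mm, 0.0)
```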

  7. The role of LED lights

    A smart irrigation unit, with microcontroller(s) at its core, also has pre-tested LED bulbs. When the on-field sensors report that the moisture level has fallen below the recommended/threshold level – the bulb glows, indicating that an irrigation event has to be initiated (i.e., the sprinkler valves have to be turned on). LED lights are also an important part of ‘tank overflow control models’, which work with powerful ultrasonic sensors. As long as the pump motor is running and the water level in the tank is beneath the threshold level – the bulbs glow. In essence, the LED lights serve as handy tools to indicate the status of the pumps and sprinklers at any time. Readings from the SMS-es or the ultrasonic tank sensors can be displayed on a mobile app, for the convenience of farmers.
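The two LED behaviours described here amount to simple boolean conditions. Function names are hypothetical; on real hardware these results would drive GPIO output pins:

```python
def moisture_led_on(moisture_pct: float, threshold_pct: float) -> bool:
    """The 'irrigate now' LED glows once moisture drops below the threshold."""
    return moisture_pct < threshold_pct

def tank_led_on(pump_running: bool, level_cm: float, threshold_cm: float) -> bool:
    """In the overflow-control model, the LED glows while the pump is running
    and the ultrasonic sensor still reports the level below the threshold."""
    return pump_running and level_cm < threshold_cm
```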

Note: Users can see the water level in a tank, or the soil moisture levels, on LCD screens.

  8. The placement of sensors

    It’s all very well to set up gateways and pumps and other tools, but unless the sensors are placed correctly in the fields – the ‘decisions’ taken by the smart irrigation network can very well be erroneous. Experts recommend that users make sure the sensors remain in contact with the soil at all times (ruling out the presence of any ‘air gaps’), and are placed a minimum of 5 ft. away from irrigation heads, property lines, homes, and high-traffic zones. For best results, the sensors should be strategically placed in the area(s) that receive the maximum sunlight, and within the root zones of the plants (at a depth of ~3”). A soil moisture sensor has to be covered with soil, but the surrounding pressure should not be too high.

  9. The rise of smarter sprinklers

    One of the biggest advantages of switching over to a smart irrigation regime is the considerable volume of water savings. These savings can be increased even more (by around 20%), by ditching the outdated sprinkler systems, and using nozzles that spray rotating water streams in multiple trajectories instead. The ‘smarter sprinklers’ go a long way in ensuring uniform distribution of water to all parts of the field (or a section of it), and offer much greater resistance to changes in weather conditions (wind speed, mist, etc.). The water released by these rotating-head sprinklers is mostly soaked in by the soil, thereby minimizing runoffs and other forms of wastage.

Note: Rain sensors have also already found widespread acceptance among crop-growers in different countries. These sensors double up as ‘shutdown devices’, sending signals to stop automated sprinklers during (and just after) heavy rainfall.

  10. More prompt fault detection and repair

Small leaks and cracks in traditional irrigation systems (in tanks, reservoirs, etc.) can lead to considerable water loss – adding to the already mounting global water crisis. What’s more, manually detecting the source of these problems is often difficult, and a potentially time-consuming affair. Installing smart irrigation tools is a great way to keep such problems at arm’s length. With IoT support, these controllers can detect existing problems in any irrigation unit in real time – which, in turn, makes it easy for users to carry out the necessary repairs immediately. In essence, an internet-enabled irrigation system can ‘supervise’ the condition of the tanks, pumps and other units – without the user having to stay in front of a computer at all times.
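One common way such systems spot leaks is by comparing the metered flow of an active zone against its expected flow. A minimal sketch – the tolerance value is an assumption:

```python
def leak_suspected(expected_lpm: float, measured_lpm: float, tolerance: float = 0.10) -> bool:
    """Flag a probable leak when the measured flow exceeds the expected flow
    by more than the given relative tolerance (10% by default)."""
    return measured_lpm > expected_lpm * (1.0 + tolerance)
```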

  11. The cost factor

While some investment is required to implement smart irrigation solutions on a field, the sensor costs are far from exorbitant. On average, the price of a soil moisture sensor lies in the $150-$160 range, while that of the more advanced WBICs is around $300. The rotating sprinklers (which, incidentally, are ideal for irrigating slopes) are priced on a per-unit basis (around $6 or $7). Large manufacturers also offer special rebates on the sensors and sprinkler units. Given the potential benefits of upgrading to a smart plant-watering system – the cost figures are relatively reasonable.

Note: SoCal WaterSmart is one of the leading manufacturers of irrigation controller systems. For crop-growers with minimal technical expertise, an IoT irrigation device like CropX (which reduces water wastage and helps in increasing yields) is ideal.

  12. The challenges

The adoption of IoT in agriculture has gone up immensely in recent times – but even so, the concept of ‘smart irrigation’ remains a relatively new one. Most of the existing smart irrigation controllers have many complex features and capabilities – which, while perfectly suited for large-scale commercial usage (e.g., on a golf course), are way too elaborate for small farm owners and individual gardeners. The need of the hour is to raise awareness about, and familiarity with, these smart irrigation systems – particularly since user inputs (type of crops, soil, surface slope, etc.) are critical for their performance. Also, it has to be kept in mind that the room for error in a ‘smart system’ is much smaller than in a traditional set-up. A mechanical failure or a network snag can have serious consequences.

There are plenty of things to be said in favour of smart irrigation setups. For starters, they help in the optimal utilization of water – ensuring uniform watering of plants, at the right times, and in the right amounts. With the help of high-end sensors, they can also factor in climatic parameters, to make the irrigation routine more efficient. Significant savings are to be had, both in terms of much lower water wastage, as well as the diminished need for manual labour. With intelligent ‘irrigation decision-making’ capacities, advanced IoT-supported smart irrigation controllers are changing the face of agriculture. The field is evolving rapidly, and it will be interesting to track further developments in this domain over the foreseeable future.


Soil-less Agriculture: An Overview Of Hydroponic Farming

Hussain Fakhruddin


An analysis of hydroponic farming


For proper growth, crops require a reliable medium that can capture and store the essential plant nutrients. In traditional agriculture, this role is performed by soil. However, the rapid emergence of hydroponic farming over the last couple of years has somewhat diminished the importance of soil in agriculture – with ‘soil-less farming’ becoming a very real possibility. A recent study revealed that the value of the global hydroponics industry will be well over $395 billion by the end of this decade (growing at a CAGR of ~6.8%). Here, we will present some interesting facts, features and characteristics of hydroponic farming:

  1. What is hydroponics all about?

    In essence, hydroponic farming is all about growing plants and crops without soil. In this method, plant roots are brought directly in contact with liquid (generally, plain water) nutrient solutions – ensuring healthy growth. The nutrients are either reused or drained off, as required. Since there is no soil involved, the development of large root systems (to draw in nutrients) is not required – and typically, the intake of nutrients by the fibrous roots of hydroponically-grown crops is very efficient (minimal wastage). In hydroponics, soil is replaced by a reservoir or a medium made of a different material – that absorbs the necessary nutrients from the water-based solution.

Note: Soilless agriculture is not, per se, a particularly innovative concept. Reports suggest that farm research experts from the 18th century were well aware of it. Dr. William Frederick Gericke is credited with coining the word ‘hydroponics’ in 1936.

  2. What materials can be used as ‘growing medium’ in hydroponics?

    Several different materials can be used to create the nutrient-absorbing medium (in essence, the substitute of soil). Depending on the precise requirements of crops and, of course, the farmer – materials like sand, hydrocorn, expanded shale and coco peat are used for the purpose. Clay pellets, vermiculite and rockwool are often opted for by hydroponic farmers as well. To be usable as the medium for hydroponic plant growth, the material has to be inert – and ensure that the crops have ready access to the liquid nutrient solution, light, oxygen and other essential enzymes (mixed with the nutrient solution).

Note: The common characteristic of all hydroponic growing mediums is their ‘inertness’ – their inability to support the growth of plants on their own, in the absence of additional nutrients. The medium is only responsible for supporting the weight of the crops, and for facilitating the passage of oxygen/nutrients.

  3. Do plants grow faster in hydroponic farming?

    There is very little doubt about that. On average, the growth rate of a plant is close to 40% higher in a hydroponic setup – compared to the traditional soil-based farming method. The annual crop yields can be as much as 75% higher – making hydroponics a great technique for large-scale, commercial crop-growers (in particular). The main reason for the shorter crop cycles and much quicker time-to-harvest in hydroponic farming is the direct contact of advanced, high-quality nutrients with the plant roots. It’s like providing the best food directly to plants – ensuring significantly faster growth of the latter. Unlike soil farming, there is no wastage of nutrients, and plant growth happens in a controlled, efficient environment. Since hydroponic farming is considerably less labour-intensive than soil farming, the availability of manual resources is not much of an issue either.

Note: There is no soil in hydroponic gardens, and hence, there is no need for spraying pesticides and strong chemicals – which can potentially have adverse side-effects on the crops.

  4. Types of hydroponic systems

    There are several alternative hydroponic system setups that farmers can opt for – depending on the exact requirements of the plants/crops they wish to grow.

  • The ‘Water Culture’ (also known as ‘Deep Water Culture’) is probably the simplest system, involving careful suspension of the plant roots in the water-based nutrient solution. Growers have to ensure that light does not get direct entry in the system (failing which, there can be significant algal growth), and air pumps are used to provide oxygen supply to the solution (and hence, to the roots). In this method, the plants are put in pots supported by polystyrene ‘floater’ boards. The tank, which contains the nutrient solution, is drained at regular intervals.
  • The ‘Nutrient Film Technique’ (or, NFT) can be applied to ensure optimal utilization of nutrients/resources, and superior-quality plant growth. The nutrient solution regularly passes over the tips of the roots (the channel has to be kept at a slight tilt, to facilitate smooth runoff) – and the plant gets the required oxygen both from the solution as well as from the air. The solution moves from the tank to the growing medium (usually, rockwool is used in this method) through a tube – and creates a film/layer of nutrients on the medium (that’s how the method gets its name). The used solution can either be recirculated, or drained out (run-to-waste NFT).
  • The principle behind the ‘Flood and Drain’ hydroponic system is also simple enough. The growing medium is ‘flooded’ with the nutrient solution at certain time-intervals. A timer is set up in the system, to repeat the ‘flooding’ process. As the solution keeps flowing across the medium, the latter absorbs important nutrients – and that, in turn, supports the growth of plants. Typically, crops that can withstand small periods of dryness are grown by this method (also known as ‘ebb and flow’ system). A point of concern in this system is the risk of a missed alert from the timer – which can lead to excessive dryness and plant suffocation.
  • The ‘Dripper’ system has similarities with the ‘Flood and Drain’ method, particularly since this one also requires a pump (for transferring the nutrient solution) and a timer. However, in the ‘drip’ system, the solution is actually dripped on to the roots of the plants and the growing medium. Hydrocorn, clay pebbles and rockwool – which drain slowly –  are the best mediums to be used in this system. Once again, the nutrient solution can either be reused or drained off. A potential downside of ‘dripper’ systems is the chance of the drip tubes/drippers getting clogged due to the formation of nutrient particles (the problem is more common when organic nutrients are used).
  • The ‘Wicking’ method of hydroponic agriculture is also popular. In this system, vermiculite or perlite mediums are generally preferred – and farmers have to either connect the plant roots with the nutrient solution through a wick, or plunge the lower portion of the medium directly into the solution (nutrients get wicked directly to the roots). Mediums that have high absorption capacities (e.g., rockwool) are not used, since they can cause suffocation of the plants (due to the excess amounts of nutrients absorbed).
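The timer-driven ‘Flood and Drain’ cycle described above lends itself to a tiny scheduling sketch. The interval lengths here are purely illustrative:

```python
# Illustrative ebb-and-flow schedule: flood the medium for a few minutes at
# fixed intervals, and let it drain in between.
FLOOD_EVERY_MIN = 240   # start a flood cycle every 4 hours
FLOOD_FOR_MIN = 15      # keep the medium flooded for 15 minutes

def pump_on(minute_of_day: int) -> bool:
    """True while the flood pump should be running at this minute of the day."""
    return (minute_of_day % FLOOD_EVERY_MIN) < FLOOD_FOR_MIN
```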

  5. The Aeroponics system

Although ‘aeroponics’ is another system of hydroponic farming, its technical differences from the others merit a separate mention. In this setup, the plant roots are kept suspended in the air, and the nutrient solution is sprayed/misted on them. A pump is used to automate the misting activity after every few seconds (a timer is used in the system as well). Like the ‘Flood and Drain’ method, ‘aeroponics’ also relies heavily on air as an important source of nutrients. A pond fogger or a fine spray nozzle is used for misting the roots with the solution.

Note: AeroGarden is a classic example of commercial application of the aeroponics growing method.

  6. How does hydroponic farming do away with uncertainties?

    In a soilless agriculture setup, there are none of the uncertainties that are typically associated with traditional farming methods (soil fertility, presence of soil organisms and pests, etc.). Farmers get the opportunity to form a preset ‘nutrient regimen’ – with complete control over the nutrients (volume and quality), pH levels (the 5.8-6.8 range is considered ideal) and oxygen availability. Problems, if any, can easily be detected and dealt with, and the entire hydroponic system can be replicated without any hassles. Enhanced reliability is a big factor working in favour of hydroponic farming.

Note: The ‘nutrient regimen’ should primarily have six ‘macro nutrients’, along with smaller amounts of the ‘micro nutrients’. Farmers also often mix the elements of two or more hydroponic systems to create ‘hybrid systems’.

  7. Does hydroponic farming help in water conservation?

    Yes, and in a big way. In traditional soil farming, significant amounts of water evaporate – resulting in wastage (both of the water as well as of the nutrients present in it). Since hydroponics does not involve dirt in any way, there is little scope for evaporation or unnecessary drainage – and the water-based nutrients can easily be recycled. Experts have reported that the total volume of water required to irrigate hydroponic gardens is about one-tenth of the amount required in soil-based ecosystems. This makes the method highly suitable for growing plants in relatively arid regions (countries in the Middle East, for instance). As a rule of thumb, fresh water is used for soilless farming, and growers have to allow some time (a day, ideally) for chlorine and other chemicals in the water to dissipate. After that, nutrients can be mixed into the ‘clean’ water, to create the ‘nutrient solution’. Rainwater is treated as the best possible source of water for hydroponics, while filtered water obtained through reverse osmosis is also good. Using heavily chlorinated water or hard water is an absolute no-no.

Note: The electrical conductivity (EC) level of water used for hydroponic farming should ideally be around 10.

  8. What are the best conditions for hydroponic farming?

    As already mentioned, hydroponics is free from the vagaries of soil quality, while properly prepped freshwater (chlorine and other chemicals removed) is best suited for this plant-growing method. The minimum level of dissolved oxygen (DO) in the nutrient solution – an important nutrient source here – is 6 ppm (parts per million). For typical ‘cool season crops’, the temperature range of 10°C – 21°C is optimal, while ‘warm season plants’ grow best when the temperature is between 15°C and 27°C. Plants in a hydroponic garden also require at least 8 hours of sunlight on a daily basis (in the absence of proper sunlight, farmers can use high-intensity sodium lamps). The water should be drained once every week (plant growth and yield can be affected by contaminated water), and the entire system should be leached/flushed just before harvest. In certain hydroponic systems (Flood and Drain, NFT), adjusting the pH levels regularly is also important.
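The conditions listed above can be combined into a single check. A sketch using the figures from this section; in practice these values would be read from probes:

```python
def conditions_ok(do_ppm: float, temp_c: float, light_hours: float,
                  warm_season: bool = False) -> bool:
    """Verify the basics: dissolved oxygen of at least 6 ppm, temperature in
    the crop's optimal band, and a minimum of 8 hours of light per day."""
    low_c, high_c = (15.0, 27.0) if warm_season else (10.0, 21.0)
    return do_ppm >= 6.0 and low_c <= temp_c <= high_c and light_hours >= 8.0
```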

Note: Apart from the removal of chlorine, calibration for pH also becomes easier when the collected freshwater is allowed to rest for a day or two.

  9. How does hydroponics help in better resource utilization and managing pollution?

    In the same space, the number of plants that can be grown by hydroponics is nearly four times what is possible with soil-based agriculture. The plants can be put in small pots or containers, placed on a countertop – and connected to the nutrient solution tank below (the pots can be suspended in the water-based solution as well). In the traditional method, growing a single plant would require at least a five-gallon bucket (probably more). This opens up the possibility of more harvests and much higher yields from hydroponic farming (with the considerably faster plant growth also contributing to this). Soil farming can also pose serious environmental challenges, with water bodies being polluted by soil nutrients not used up by crops, and probable accumulation of salt in the groundwater (salination). The runoff of chemical nutrients into lakes/rivers can lead to deoxygenation – putting the lives of aquatic animals at risk. The use of pesticides brings in risks of air pollution. In hydroponics, there is no soil, and no such potential environmental hazards. It’s a ‘green’ method!

Note: There are many areas with harsh climatic conditions, where soil maintenance becomes a huge concern for farmers. Hydroponic farming is a great option in such places. Weeding is yet another task that hydroponic farmers need not worry about.

  10. Are there any associated risks or challenges?

Hydroponics is a simple method of alternative farming (the most technical thing about it is probably its name!) – and there is not much in the way of risks in this system. However, growers have to make sure that the plants can access the nutrient solution at all times – otherwise, the roots can become too dry very quickly. In the ‘Dripper’ or the ‘Flood and Drain’ systems, special care has to be taken about the reliability of the timers and alarms. Any malfunction in the latter can cause serious damage to the plants. In general, while the automated nature of plant feeding and growth has a host of advantages (higher yields, faster growth, better quality, optimal nutrient use, etc.) – things can quickly go bad in the event of a lengthy mechanical failure. Hydroponic plants tend to be smaller in size (and with smaller, less complex roots) than plants grown in traditional soil fields.

Note: Hydroponics can be applied for both indoor and outdoor farming, including plant growing in greenhouses. The method is best suited for plants that have shallow root systems. A wide range of fruits, houseplants and veggies, right from spinach and herbs, to lettuce and radish, can be grown with hydroponic farming.

While hydroponics might seem to be a variant of organic farming at first, the two are actually entirely different methods. The former has no role for soil, while organic farming requires the conversion of nutrients by the soil (so that they can be absorbed by plant roots). In terms of nutritional value and ecological benefits too, hydroponics offers much greater advantages than conventional soil-based farming. The systems are easy to set up, making DIY hydroponics relatively simple too. For professional crop-growers as well as general gardening enthusiasts, it is now possible to grow healthy plants…without having to get their hands dirty!

[Infographic] Small Businesses HAVE To Build Mobile Apps..And Here’s Why

Hussain Fakhruddin

(This GUEST POST has been contributed by Colin Cieloha, North American Territory Manager at

Need for mobile apps for small businesses

ARKit and Core ML: An Overview Of The New Apple Frameworks

Hussain Fakhruddin

At this year’s Worldwide Developers Conference (WWDC; 5-9 June), Apple announced two new frameworks – an augmented reality (AR) developer kit named ARKit, and a machine learning API called Core ML. The frameworks will be among the key features of iOS 11 (the third beta was released earlier this month) – the latest version of the company’s mobile platform. In today’s discussion, we will give you a brief idea about ARKit first, and move on to Core ML next:


“Augmented reality is going to help us mix the digital and physical in new ways.”

— Mark Zuckerberg, Facebook F8 Conference

Over the years, there has hardly been any activity from Apple in the virtual reality (VR) and augmented reality (AR) realms. As major rivals like Amazon (with Alexa), Microsoft (with HoloLens) and Google (with Project Tango) have upped their respective games, all that we got from Apple in the form of AR tools were Siri and iOS 10’s ‘intelligent’ Quicktype. The scene has changed with the arrival of ARKit, which has been billed as the ‘largest AR platform in the world’ by Apple.

  1. More power to developers and apps

    For third-party mobile app developers working on the iOS platform, ARKit brings in never-before capabilities to blend AR experiences into their applications. With the help of the framework resources, the motion sensors, and of course the camera of the iPhone/iPad, devs will be able to make their software seamlessly interact with the actual environment (read: digital tools will enrich the real world). The role of AR in Pokemon Go was only the tip of the iceberg (many users even reported that the gameplay got enhanced when AR was turned off) – and ARKit will help developers go all in to integrate augmented reality in their apps, to make the latter unique, useful and more popular than ever before.

  2. The fight with Google and Facebook

    Apple is late to the AR game, there are no two ways about that. For ARKit to be able to make a mark, it has to offer something more than the AR-based solutions of Facebook and Google, which are both established players in this domain. Interestingly, Apple’s new framework DOES seem to have a key advantage: it is compatible with all existing iDevices running on the A9 or A10 chip, while for integrating Project Tango, Android OEMs have to create separate, customized hardware. Also, Facebook’s AR activity is, at least till now, confined to its own Camera app only. ARKit, on the other hand, will be pushed out to all iPhone/iPad applications. In terms of existing user-base, Apple certainly holds a strong position.

  3. How does ARKit work?

    The ARKit framework does not form three-dimensional models to deliver high-end AR experiences to app users. Instead, it uses a technology called Visual Inertial Odometry (or, VIO) – which combines information from the CoreMotion sensors and the device camera to track the movement of the smartphone/tablet in a room. Put another way, a set of points is traced in the environment by ARKit – and these points are tracked as the device is moved. This functionality is expected to help developers create customized virtual-world experiences over the real environment with their new apps (the superior processing speeds of the A9/A10 chips are also an important factor). ARKit does not need any external calibration either, and should typically generate highly accurate data.

Note: The process in which ARKit integrates virtual elements into the real world with the help of projected geometry is known as ‘world tracking’.
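The point-tracking idea described above can be sketched with a toy example. This is only an illustration of the principle (plain Python, made-up coordinates), not ARKit's actual algorithm, and it covers only the visual half of VIO – real world tracking also fuses in the motion-sensor data:

```python
# Toy illustration of the idea behind visual tracking: feature points are
# detected in the scene, and their apparent motion between frames is used
# to estimate how the device itself has moved.

def estimate_camera_shift(points_prev, points_curr):
    """Estimate 2D camera translation from tracked feature points.

    If the scene is static, points appear to shift opposite to the
    camera's own motion, so we negate the mean point displacement.
    """
    n = len(points_prev)
    dx = sum(c[0] - p[0] for p, c in zip(points_prev, points_curr)) / n
    dy = sum(c[1] - p[1] for p, c in zip(points_prev, points_curr)) / n
    return (-dx, -dy)

# Three tracked points all appear to move 2 units right and 1 unit down...
prev = [(0, 0), (10, 5), (4, 8)]
curr = [(2, -1), (12, 4), (6, 7)]

# ...so the camera itself must have moved 2 units left and 1 unit up.
print(estimate_camera_shift(prev, curr))  # -> (-2.0, 1.0)
```

ARKit does this with hundreds of points per frame, in 3D, and at the device's frame rate – but the underlying intuition is the same.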

  4. The role of dual cameras

    Apple’s decision to do away with the headphone jack in iPhone 7 raised quite a few eyebrows, and there was considerable curiosity about the presence of dual cameras in the handset. The announcement of ARKit fully justifies the latter decision, though. With the help of the dual cameras, gauging the distance to a point seen from two slightly different viewpoints becomes easier, and triangulating that distance becomes possible. The two cameras, working together, offer improved depth sensing and, obviously, better zooming features as well. This, in turn, helps the handset create pinpoint-accurate depth maps, and differentiate between background objects and foreground objects.
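The triangulation mentioned above follows the standard stereo geometry: two cameras a known baseline apart see the same point at slightly different image positions (the disparity), and the distance falls out of similar triangles. The numbers below are purely illustrative, not actual iPhone camera specs:

```python
# Pinhole stereo model: distance Z = f * B / d, where f is the focal
# length (in pixels), B the baseline between the two cameras (metres),
# and d the disparity (pixels) of the point between the two images.

def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    if disparity_px <= 0:
        raise ValueError("point must have positive disparity")
    return focal_length_px * baseline_m / disparity_px

# A point with 40 px disparity, cameras 1 cm apart, f = 2000 px:
print(depth_from_disparity(2000, 0.01, 40))  # -> 0.5 (metres)
```

Note how nearby objects produce large disparities and distant ones small disparities – which is exactly why a dual-camera setup can separate foreground from background.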

  5. Finding planes and estimating lights

    Floors, tables and other basic horizontal planes can be detected by the ARKit framework. After detection, the device (iPhone/iPad) can be used to place virtual objects on the tracked surface/plane. Plane detection (scene understanding) is done by devices with the help of the ‘scenes’ generated by the built-in camera. What’s more, the framework can also determine the availability of light in different scenes, and ensure that virtual objects have just the right amount of lighting to appear natural in any particular scene. From tracking the perspective and scale of viewpoints, to shadow correction and performing hit-tests on digitized objects – the ‘world tracking’ functionality of ARKit can do it all.

Note: Once the scene understanding and estimation of lighting is done, virtual elements can actually be rendered to the real environment.
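A toy version of the plane-detection step can make the idea concrete. ARKit's real scene understanding clusters the feature points it tracks; the sketch below (a hypothetical simplification, not Apple's algorithm) simply looks for the height shared by the most 3D points and calls that the dominant horizontal plane:

```python
# Find the dominant horizontal plane in a set of (x, y, z) feature points:
# the y value (height) that the largest number of points agree on, within
# a tolerance, averaged over its supporting points.

def detect_horizontal_plane(points, tolerance=0.05):
    """Return the estimated plane height, or None if fewer than 3 points agree."""
    best_height, best_count = None, 0
    for _, y, _ in points:
        support = [q[1] for q in points if abs(q[1] - y) <= tolerance]
        if len(support) > best_count:
            best_count = len(support)
            best_height = sum(support) / len(support)
    return best_height if best_count >= 3 else None

# Four points near y = 0.0 (a floor) and one outlier at y = 0.8:
points = [(0, 0.01, 1), (1, -0.02, 2), (2, 0.0, 0), (3, 0.02, 1), (1, 0.8, 1)]
plane_y = detect_horizontal_plane(points)
print(round(plane_y, 3))  # roughly 0.0 -- the floor height
```

Once such a plane is found, a virtual object can be anchored at that height – which is what lets ARKit "place" things convincingly on a table or floor.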

  6. The limitations of ARKit

ARKit is Apple’s first native foray into the world of VR/AR, and the tech giant is clearly planning to take small steps. As things stand at present, the framework lacks many of the convenient features of Google’s Project Tango – from the capability of capturing wide-angle scenes with the help of additional cameras, to full room scanning and the creation of 3D room models without external peripherals (which iOS would need). The framework is not likely to have built-in web search capabilities (as Facebook’s and Google’s AR solutions have) either. What ARKit is expected to do (and do well) is motivate developers to come up with new app ideas with AR as their USP. It does not place any extra pressure on the device CPU, and also offers high-end object scalability. The Apple App Store has more than 2.2 million apps – and if a significant percentage of them gain AR features (e.g., the option of activating an AR mode), that will be instrumental in helping the technology take off in a big way.

In 2013, Apple coughed up around $345 million to acquire PrimeSense, the 3D sensor company whose technology powered Microsoft’s Kinect sensor. A couple of years later, the Cupertino company swooped in once again, acquiring Linx (a smart camera module manufacturer) for $20 million – along with Metaio (an AR startup). ARKit might be the first significant augmented reality tool from Apple, but the company has been clearing the way for it for a long time. The arrival of this framework is big news, and it can revolutionize the interactions of iDevice owners with their mobile apps.


Core ML

The global artificial intelligence (AI) market has been estimated to touch $48 billion by the end of this decade, growing at a CAGR (2015-2020) of more than 52%. Once again, Apple has been relatively quiet on the AI and machine learning (ML) front (apart from the regular improvements in Siri). Rivals like IBM, Google, Facebook and Amazon are already firmly entrenched in this sector, and it will be interesting to see whether Core ML on iOS 11 can put Apple in a position of strength here.

  1. What exactly is Core ML?

Core ML has been created as a foundational framework for building optimized machine learning services across the board for Apple products. The implication of its arrival is immense for app-makers, who can now blend superior AI and ML modules into their software. The manual coding required for using Core ML is minimal, and the framework supports deep learning with more than 30 different layer types. With the help of Core ML, devs will be able to add custom machine learning capabilities to their upcoming apps for the iOS, tvOS, watchOS and macOS platforms.

  2. From NLP to machine learning

Way back in 2011, natural language processing (NLP) debuted on iOS 5 through NSLinguisticTagger. iOS 8 brought in Metal, a tool that accessed the graphics processing units (GPUs) to deliver enhanced, immersive gaming experiences. In 2016, the Accelerate framework (for processing signals and images) received something new – the Basic Neural Network Subroutines (or, BNNS). Since the Core ML framework is designed on top of both Accelerate and Metal, the need to transfer data to a centralized server is eliminated. The framework can function entirely within a device, boosting the security of user-data.

Note: The iPhone 8 might well have a new AI chip. If that happens, it would be perfectly in line with Apple’s attempts to create a space for itself in the machine learning market.

  3. How does Core ML work?

The operations of the Core ML framework can broadly be divided into two stages. In the first stage, machine learning algorithms are applied to available sets of training data (for better results, the training dataset has to be large) – creating a ‘trained model’. The next stage involves the conversion of this ‘trained model’ to a file in the .mlmodel format (i.e., a Core ML model). High-level AI and ML features can be integrated into iOS applications with the help of this Core ML model file. The function flow of the new machine learning API can be summarized as: creating ‘trained models’ → transforming them into Core ML models → using them to make ‘intelligent’ predictions.

The Core ML Model contains class labels and all inputs/outputs, and describes the layers used in the framework. The Xcode IDE has the capability of creating Objective-C or Swift wrapper classes (as the case might be), as soon as the model is included in an app project.
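The train → model file → predict flow described above can be sketched in miniature. This is a plain-Python analogy (with a least-squares fit as the "training" and JSON standing in for the .mlmodel format), not actual Core ML code:

```python
# Stage 1: train a model; Stage 2: serialize it to a portable file;
# finally, the 'app' loads the file and makes predictions from it.
import json

def train(samples):
    """Fit y = a*x + b by least squares -- the 'trained model' stage."""
    n = len(samples)
    mx = sum(x for x, _ in samples) / n
    my = sum(y for _, y in samples) / n
    a = sum((x - mx) * (y - my) for x, y in samples) / \
        sum((x - mx) ** 2 for x, _ in samples)
    return {"a": a, "b": my - a * mx}

model = train([(0, 1), (1, 3), (2, 5)])   # learns y = 2x + 1
blob = json.dumps(model)                  # stands in for the .mlmodel file

# In the 'app': load the serialized model and predict for x = 10.
loaded = json.loads(blob)
print(loaded["a"] * 10 + loaded["b"])     # -> 21.0
```

The point of the analogy is the separation of concerns: the expensive training happens once, offline, and the app only ever ships and executes the compact serialized model – which is what lets Core ML run entirely on-device.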

  4. Understanding Vision

While ARKit and Core ML were the frameworks that grabbed most of the headlines at WWDC 2017, the arrival of a new computer vision and image analysis framework – appropriately named Vision – has been equally important. Vision works along with Core ML, and offers a wide range of detection, scene classification and identification features – right from ML-backed picture analysis and face recognition, to text and horizon detection, image alignment, object tracking and barcode detection. The wrappers for the Core ML models are generated by the Vision framework as well. Developers have to, however, keep in mind that Vision will be useful only for models that are image-based.

Note: Just like the other two frameworks, Vision also works with the SDKs of iOS 11, tvOS 11 and macOS 10.13 beta.

  5. Supported models

    The Core ML model, as should be pretty much evident from our discussion till now, is THE key element of the Core ML framework. Apple offers as many as 5 different, readymade Core ML models for third-party developers to use when creating apps. These models are Places205-GoogLeNet, Inception v3, ResNet50, SqueezeNet and VGG16. Since Core ML works within the devices (and not on cloud servers), the overall memory footprints of these models are fairly low. Apart from the default-supported models, the new API supports quite a few other ML tools (libSVM, XGBoost, Caffe and Keras).

Note: Whether a model is to be run on the GPU or the CPU of the device is decided by the Core ML framework itself. Also, since everything is on-device, the performance of machine learning-based apps is not affected by poor (or unavailability of) network connectivity.

  6. The limitations of Core ML

    There are no doubts about the potential of Core ML as an immensely powerful tool that lets developers seamlessly add efficient machine intelligence to apps usable on all Apple hardware devices. However, much like ARKit, this framework too seems slightly undercooked on a couple of points. For starters (and this is a big point), Core ML is not open-source – and hence, app-makers have no option to tweak the API for their precise development requirements (most other ML toolkits are open-source). Also, in the absence of ‘federated learning’ and ‘model retraining’ in Core ML, the training data has to be provided manually. The final release of iOS 11 is still some way away, and it remains to be seen whether Apple adds any other capabilities to the framework.

Tree ensembles, neural networks, SVMs (support vector machines) and regression (linear/logistic) are some of the models supported by Core ML. It is a framework that will make it possible for iOS developers to consider making apps with machine learning as one of their most important features. Core ML has been hailed by Apple as ‘machine learning for everyone’ – and it certainly can make machine learning (ML) and deep learning (DL) an integral part of iOS app development in the future.

App Entrepreneur Hussain Fakhruddin Talks About His Role As A Coach At Teksmobile

Hussain Fakhruddin
Follow me

Hussain Fakhruddin

Hussain Fakhruddin is the founder/CEO of Teknowledge mobile apps company. He heads a large team of app developers, and has overseen the creation of nearly 600 applications. Apart from app development, his interests include reading, traveling and online blogging.
Hussain Fakhruddin
Follow me

Latest posts by Hussain Fakhruddin (see all)

(This post was originally published as a press release)


In a recent exclusive interview, noted app evangelist Hussain Fakhruddin – the CEO of Teksmobile (Australia | India | Sweden | USA) – reflected on his role as a coach and a motivator for his colleagues over the years. The startup recently completed 11 years of existence, and as Mr. Fakhruddin highlighted, human resources are the greatest asset the company has.

The Teks team, with CEO Hussain Fakhruddin


Here is the transcription of an excerpt from the interview:

“I like robots, but only in story books and movie screens. When I had conceptualized a tech startup some 11 springs back – one which would have the capacity to challenge the biggest of mobile app companies in the world – I was clear about one thing: I did not want robots in my company. From the very start, Teksmobile has had the good fortune of having diverse and interesting personalities in its fold – and I daresay that they, with a little bit of guidance and coaching from yours truly – have been instrumental in forming the Teks success story.

Take the case, for instance, of the man who performs the dual role of Key Accounts Manager and Office Admin at my office. A true-blue workaholic, he goes about his job every day with a smile on his face, and an easy-going confidence that is well and truly infectious. The first thing I noticed when he came in for his interview at Teks was the steely determination in his eyes and a willingness to embrace challenges. Given the range of responsibilities that this post brings with it, I was initially apprehensive about finding an employee who would be good enough for it. This person has met my expectations, and then some.

As an entrepreneur – although I prefer to think of myself more as the head of a work-family – I try to maintain a pleasant demeanour all the time. This is one quality that I share with the person I am talking about. He is always available for a chat, has the patience to listen to every issue and grievance of others, and is game for every challenge – from sorting out a faulty office wifi router, to managing important corporate documents. For this guy, weekends do not make a difference – and evidently, doing a job – and doing it well – is all that matters. A trusty, intelligent, dedicated sidekick of mine!

Mr. Fakhruddin at MWC2017

Then there is the senior iOS developer, who has been a member of Team Teks for close to a decade now. When in the zone, he talks very little, often has messed up eating times, and has a wacko gameface on…you will never know whether he is trying to debug a piece of code or watching a particularly intense film! He takes every new project as an opportunity – an opportunity to expand his horizons, to learn more, and to prove his tech skills all over again. At times, he seems less like a software engineer, and more like a soldier in a fight.

So this person is all serious almost the entire time he is at office, but can you mark him as just another geek? You absolutely can’t – and the man’s diverse interests caught my attention from the very start. Put a DSLR camera in his hand, and the hidden photography-lover inside him comes to the fore – as he would lovingly check out its buttons and lens and what-not, with a tale or two about the basics of photography ready on his lips. He is also someone the junior developers look up to…someone who can be relied upon to help others, no matter how big a coding-related problem might seem to be. The guy is a big-time lover of travelling too, and of course, has a great many snaps (all taken by himself) from places all over the world. The most impressive thing about him is the way he constantly tries to learn and improve his skill-sets, while not letting go of things outside work that he is passionate about. If ever I need a person to bounce new ideas off, the guy is my best ally.

Every team has a funnyman, and at Teks, the head of the graphics department dons that role. Meet him outside office, and he is going to regale you with some of the most hilarious PJs and strange stories…often about himself. He is also an expert in leg-pulling, always with a prank or three up his sleeve. The man is not shy of having fun at his own expense – often bringing the house down with his ‘horrific’ (there’s no other word for it!) singing skills.

Team Teks celebrates Independence Day

With this person, it’s a classic ‘Dr.-Jekyll-and-Mr.-Hyde’ story. When he first joined the Teks team, all that I saw was a bundle of energy who was just very, very good at his job. All that I had to do was channelise this energy and give him a platform to showcase his skills. ‘Creativity’ is the middle name of this guy – as he continues to add life to graphics and images and animations, for apps and websites, and promotional stuff, and practically everything else. Over the years, I have actually increased his responsibilities gradually – and he has taken to the new tasks like a duck to water. He also doubles up as a mentor to our in-house animators and game developers. A classic example of how you can bring your A-game to work, without ever having to sacrifice your inherent ‘joie-de-vivre’. Inspiring, indeed!

Every human being has their own distinct personality and traits – and I keep reiterating the importance of holding on to that, for everyone at Teksmobile. There is that one intern who joined as a part of the content team, who is now one of the senior app testers at office. The nickname of one of our senior PHP developers is ‘slow-motion’ – but while at work, he is among the fastest to get things done, and his eye for quality is outstanding. A couple of Android developers have taken up the challenge of temporarily moving overseas (to Germany) and working from there. Then there is the curious case of our digital marketer – who gets totally zoned out when a keyboard is within reach – and often makes others wonder why he does not even get up to eat! The project managers who are always on the lookout to keep track of tasks, the HR lady who does much more than just schedule interviews and keep track of leaves, the animator who would rank among the biggest movie fanatics ever, the iOS developer who takes on the leadership mantle whenever someone is roasted at office (generally during the monthly birthday parties) – each person brings his/her unique qualities to office, and I feel that it is this diversity that has helped Teksmobile assume its current stature as a market leader.

My company is a great example of the whole unit being greater than the sum of its parts. I feel proud…whenever I see my team of diverse individuals casting their differences aside to work together, bringing more success to the Teks brand in the process. 11 years ago, I had a dream of heading a tech startup – and these people have shared that dream in their own ways, giving shape to my ambition. I have given them guidance and advice whenever required…but at the end of the day, they deserve every bit of recognition for their stellar work for the ‘Teks Family’. I should take this opportunity to thank our partners from Sweden, Australia and the US as well.

Mr. Fakhruddin with clients

At Teks, it is all about showcasing your talents, shaping your own luck, and becoming ‘bigger and better’ than ever before. Far from being robots, my employees are a team of disruptors – steadfastly refusing to follow preset norms, and constantly driving technology forward. And that’s just what I want them to be!”


Teksmobile is, at present, one of the top cross-platform mobile app and API development companies worldwide. The company is also working on projects based on cutting-edge technologies like VR/AR, internet of things (IoT), artificial intelligence (AI) and smart agriculture. To know more about the company, visit:



Apple HomePod vs Amazon Echo: How Well Does Apple’s New Connected Speaker Stack Up?


The smart home speaker market is no longer a straight shootout between Amazon Echo and Google Home. At this year’s Worldwide Developers Conference (WWDC), Apple made its rather-delayed entry into the domain of connected speakers – by announcing the multi-featured Apple HomePod (it will hit the markets in December). Given that more than 100 million smart speakers will be shipped by 2024, generating revenues of more than $14 billion, the HomePod does have ample scope to make a mark in this market. For that to happen though, it has to match up to the challenge of Amazon Echo (primarily), which debuted in 2014, and is currently by far the most popular AI-based smart speaker in the market (with a 3x lead over Google Home). We will here do an Apple HomePod vs Amazon Echo analysis, and try to determine which of these connected speakers comes out on top:

(Note: Amazon Echo has been in existence for over a couple of years, while Apple HomePod is yet to be launched. The comparison, if required, will be updated after the latter is commercially released)

  1. Speakers & Microphones

    Apple HomePod has been positioned primarily as a high-quality audio device – and it certainly has the edge as far as the built-in microphones and sound configuration architecture (upgraded Sonos-like speakers) are concerned. Each of the 7 tweeters of the HomePod has its very own ‘custom amplifier’ setup, along with the 4” woofer. The single woofer (2.5”) plus speaker combination of Amazon Echo rather pales in comparison with the much more advanced setup of Apple’s speaker. Also, HomePod has six far-field microphones (Amazon Echo has seven), along with a low-frequency mic. Casual listeners might not quite get the subtle improvements in sound quality that the HomePod will offer – but for discerning users, it can well be a significant factor.

  2. Visual appeal

    By 2020, nearly 3 out of every 4 homes in the United States will have at least one connected speaker. In other words, smart speakers are well on their way to becoming mainstream – and physically, they need to blend well with the actual room decors of end-users. Once again, Apple HomePod – with its typically minimalistic design – would have the edge. It is shorter than the Amazon Echo (<7” compared to >9”) and is covered with a speaker grille. There is a glowing area at the top, much like the blue Alexa ring that glows on the Amazon Echo when it is being talked to. The Echo is rather too conspicuous with its crisp cylindrical structure – and can tend to stick out in a room.

  3. Processor performance

    Apple HomePod is powered by the proprietary A8 chip (which debuted on iPhone 6/6 Plus a couple of years ago). It will allow the new speaker to deliver superior-quality audio performance, customized for different locations/rooms – thanks to the capability to analyze spatial data. Amazon Echo, which has the powerful DM3725CUS100 digital media processor, generally offers the same audio quality everywhere, and is not affected by locational changes. Make no mistake though – the audio quality of Amazon Echo is excellent, and the HomePod – with all its elaborate settings – will have a tough job of surpassing that.

  4. Utility as a well-rounded smart home device

    The Cupertino company has tried to avoid a head-on tussle with Amazon Echo, by positioning the HomePod as a ‘music first’ smart speaker (Siri was referred to as ‘musicologist’), and focusing primarily on the audio features of the device. There is built-in support for the HomeKit platform, allowing users to adjust room temperatures, switch on/off lights and other appliances, access weather information and perform other smart home tasks. There is no HomeKit-like hub for Amazon Echo, which relies on its third-party set of ‘Skills’, to provide a vast range of services for connected homes. Still, it seems that the target group of users for the two smart speakers will be different – those more interested in AI home assistants (with audio as an afterthought) will go for Amazon Echo, while people who are more concerned with the music/sound capabilities will consider the Apple HomePod.

Note: Amazon has also partnered with Samsung for the integration of SmartThings in Echo.

  5. Siri vs Alexa

Apple’s much-loved AI digital assistant Siri is getting smarter with time. It can offer contextual search options, translation services and a selection of other advanced features to HomePod users – tasks that Amazon’s Alexa is not equipped to perform. For regular web searches, information access and even timer-setting activities, the efficiency levels of Alexa and Siri are roughly the same (although the HomePod will be more likely to accurately understand voice commands in a crowded, noisy room). Alexa is a powerful AI digital assistant, but Siri on HomePod has the potential to be just a bit better.

  6. Trigger words

Amazon Echo offers more options to users when it comes to the choice of ‘wake words’ or ‘trigger words’ to activate the device. ‘Alexa’ is, of course, the default wake word, but people can also use ‘Echo’, ‘Amazon’ and even ‘Computer’ (added in January 2017; a nod to ‘Star Trek’ fans, perhaps?) to get started with Echo. The HomePod, on the other hand, will respond to the single ‘Hey Siri’ phrase. Not only does the Amazon speaker offer more ways to ‘call’ it – it also seems more natural to say its wake words repeatedly than to say ‘Hey Siri’ (or, for that matter, ‘Ok Google’ for Google Home) many times.

  7. Platform and device compatibility

Although the usability of Apple HomePod will be limited to the iOS platform only (Amazon Echo can be paired with both Apple and Android phones), the extensive range of popular Apple devices (iPhones, iPods, MacBooks and iPads) hands an advantage to the former. It will be that much more easily integrable into the regular setup of smart devices used by people. Amazon Echo, on its part, has only its own speaker to play music through (the smaller Echo Dot can be attached to speakers via a 3.5 mm port or via Bluetooth). Apple also has the option of enabling the transfer of music/video from the HomePod to Apple TV (as Google does for Home and Chromecast). If this feature is indeed incorporated, using Apple’s speaker would become really easy.

  8. Support for music stores

Amazon Echo fairly blows away the HomePod in terms of third-party music support. Users can stream music from Audible, Pandora, TuneIn, Spotify and iHeartRadio, in addition to Amazon Music and Prime Music, on the Echo speaker. In contrast, the HomePod will only have Apple Music to start things off. The audio experience on the new speaker will be more customized and (hopefully) of a better quality – but the sheer range of music support on the Echo makes it win this round hands-down.

  9. Multi-room functionality

Apple HomePod has this, while Amazon Echo does not. Apple announced the AirPlay 2 wifi standard last month – and that will ensure smooth multi-room support for the smart speaker in particular, and the HomeKit platform in general. Amazon Echo, in its present form, does not have any such comparable feature. However, there is a corollary to this: the support for third-party apps is very limited on AirPlay 2 (understandably, with it being a new standard). For regular single-room functions, the Echo does not have any such limitations.

Note: Multi-room support is offered by Google Home as well.

  10. Integration with third-party applications

Google Home arrived last year, and already has a fairly impressive list of supported third-party apps. Amazon Echo, by virtue of being the oldest player in the market – offers even more in this regard, with support for regular, essential applications like Sky News, National Rail and Uber. Apple HomePod is the new kid on the block, and it will take some time to build a network of supported apps. It can be reasonably expected that between now and December, there will be news of several new apps becoming available on the HomePod. For the moment though, it’s advantage to Echo in this context.

  11. Market share

The stranglehold that Amazon Echo has over the worldwide smart speaker market won’t make things easy for the HomePod. On average, 7 out of every 10 connected speakers sold are Echo devices – and there are, at present, well over 8 million people using this home speaker. The combined sales figure of Amazon Echo and Google Home will nudge towards 25 million units by the end of this year – a significant figure in a market that is not yet very large. However, the smart home market is expanding rapidly – and if the Apple HomePod is as good as many tech enthusiasts feel it has the potential to be, there will be a market for it.

  12. The price factor

Apple has always been a company that makes ‘premium products’ (let’s forget about the icky iPhone 5C for the moment!). The Cupertino tech giant has retained that approach for the upcoming HomePod, which will be priced at a hefty $349 in the American market, nearly double the price tag of $179 for Amazon Echo. What’s more – for the consumer looking for a smart speaker that offers plenty of cool functions as well as an affordable price, Google Home ($129) can seem to be the best alternative. The cost of the smaller Amazon Echo Dot (second generation) is as low as $50. Apple has been trying to market the HomePod as much more than a smart speaker (hence the focus on audio/music, and less emphasis on Siri/AI capabilities) – but the much higher price point will be a barrier to customer adoption – particularly since similar (and arguably, equally good) devices are available at much cheaper rates.

Compared to Amazon Echo, the form factor of the Apple HomePod has a slightly bulkier feel about it (HomePod weighs 5.5 pounds, while the Echo weighs only 2.3 pounds). The more advanced mic and speaker setup of the HomePod should offer better audio quality – but it remains to be seen whether that will be enough to motivate users to fork out the considerably higher price. There is no room for doubting that the HomePod has several top-class features – but managing the steep price tag will be a big challenge. At the moment, it appears that the HomePod will find favour among those who are already invested in the Apple device ecosystem, while for others, Amazon Echo (or Google Home) will remain the preferred choice.

We are still months away from the launch of Apple HomePod. A lot can change in the interim, and the new speaker might well get new features that enhance its overall attractions.



Drones In Agriculture: 15 Key Facts & Trends



An analysis of the use of drones in agriculture


The popularity of drone technology is soaring higher every quarter. The total number of drones produced this year is expected to reach 3 million – marking a ~40% YoY increase over 2016. Revenues from the usage of drones are climbing rapidly too, and according to a recent Gartner report, will go beyond $11 billion by the end of 2020. Agriculture has emerged as one of the most important fields for the application of drone technology, with the focus squarely on the refinement and advancement of precision farming standards. The CAGR of the agricultural drone market is estimated to hover around 28% for the next 4-5 years – with its value nearing the $3 billion mark by 2021. In what follows, we will highlight some of the latest trends and points of interest related to farming drones and their uses:

  1. Smarter crop planting

    Drones have the capacity to generate big savings for farmers. A classic instance of this is related to the task of planting seeds/crops on fields. Automated unmanned aerial vehicles (UAVs) are increasingly being used to place nutrients as well as pods/seeds in the soil – making the overall process much quicker (compared to manual planting), and bringing down the average expenses of planting by a whopping 85%. Planting drones can deliver uptake rates of up to 70%-75%, and are ideal for ensuring better sustainability of crops.

  2. Evolution of farming drones

    Till as late as 2015, the functions of drones in agriculture were limited. Most drones were simplistic imaging devices, used to take hi-res photos of farmlands. The aerial images were referred to by farmers to get an idea of crop health, weeding requirements and other basic farming activities. Things have changed a lot over the last few quarters, and the latest agricultural drones are all about delivering ‘actionable intelligence’ to users. NIR (near-infrared) sensors are being used to create accurate crop health maps, by tracking the green vegetation mass around crop areas. Also known as NDVI (Normalized Difference Vegetation Index) maps, these reports help in identifying areas where there are chances of yield losses. Aerial photography is still an important feature of farming drones – but the latter are currently used for many other purposes as well.
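The index behind those crop-health maps is simple to compute: healthy vegetation reflects strongly in near-infrared and absorbs red light, so (NIR − Red) / (NIR + Red) rises towards 1 over healthy crops. The band values below are illustrative reflectances, not real sensor data:

```python
# NDVI for a single pixel, from its near-infrared and red band values.
# Values near 1 indicate dense healthy vegetation; values near 0 or
# below suggest stressed crops, bare soil or water.

def ndvi(nir, red):
    if nir + red == 0:
        return 0.0  # avoid division by zero on dark/empty pixels
    return (nir - red) / (nir + red)

print(round(ndvi(0.45, 0.05), 2))  # healthy crop    -> 0.8
print(round(ndvi(0.20, 0.15), 2))  # stressed area   -> 0.14
print(round(ndvi(0.25, 0.30), 2))  # bare soil/water -> -0.09
```

Mapping this value per pixel across a drone's NIR imagery is what produces the colour-coded NDVI maps that flag likely yield-loss zones.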

  3. Types of drones for agriculture

    Drones have come a long way from being mere recreational gadgets with mediocre flight planning power and lowly payload capacities (range also used to be a factor). At present though, drones are finding extensive adoption in many fields of business, with agriculture being one of the most important domains. Farming drones can broadly be classified under two heads – multi-rotor drones and fixed-wing drones. The former is particularly useful in scenarios where low-altitude flying is required to capture high-quality crop photos and related information; multi-rotors can also take off and land vertically without prior flight-path planning, and are generally much easier to manage. Fixed-wing UAVs, on the other hand, fly faster and for longer, and can cover much larger areas in a single flight.

  4. The cost advantage

    One of the main drivers of the proliferation of drone technology in the agricultural sector has been the extremely competitive cost levels. The average farmer can purchase many types of smart farming drones at sub-$1000 levels – which is considerably cheaper than hiring manned aircraft for crop photography (the hourly rental rates of such aircraft are likely to exceed the one-time price of a drone). The images captured by drones are typically of higher resolution than those taken with the help of satellite imaging tools. Drones experience minimal interference in their flight paths too – thanks to the fact that they fly under the clouds. On both the quality and the cost fronts, agricultural drones offer significant advantages to farmers and investors.

  5. Flying high

    Depending on their precise nature and objective(s), the height at which farming drones fly varies from 50 meters to 100 meters. The Federal Aviation Administration (FAA) mandates that UAVs must remain within the operator’s visual line of sight (in the US, special permits might be required for drones flying at heights of >120 meters). In addition, drone operators have to abide by other national-level rules and regulations. The average wingspan of an agricultural drone is around 1.2 meters, and its weight varies in the 1.5-2.0 kg range.

  6. Components of farming drones

    Agricultural drones follow automated flight paths (drones are, by definition, ‘unmanned’). Open-source autopilot programs are typically used to fly these drones – and they have several other important components. At the core of farming drones are cutting-edge microelectromechanical systems (MEMS) sensors – which transmit data from the farmlands, and receive instructions, on a real-time basis. Different types of sensors are used – ranging from regular pressure sensors to the more advanced gyroscopes, accelerometers and magnetometers. Powerful built-in GPS modules improve locational accuracy, while high-capacity processors power the drones. A small digital radio is generally attached to agricultural drones as well.
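
To illustrate how gyroscope and accelerometer readings are typically combined, here is a minimal complementary filter – a common fusion technique in hobbyist and commercial autopilots alike, shown here as a generic sketch rather than the method of any particular drone (the readings are simulated):

```python
def complementary_filter(pitch_prev, gyro_rate, accel_pitch, dt, alpha=0.98):
    """Blend a fast-but-drifting gyro integration with a noisy-but-stable
    accelerometer angle estimate; alpha controls the trust in the gyro."""
    return alpha * (pitch_prev + gyro_rate * dt) + (1 - alpha) * accel_pitch

# Simulated readings: the true pitch is steady at 5 degrees.
pitch = 0.0
for _ in range(200):
    gyro_rate = 0.0        # deg/s (drift-free here, for simplicity)
    accel_pitch = 5.0      # degrees, as derived from accelerometer axes
    pitch = complementary_filter(pitch, gyro_rate, accel_pitch, dt=0.01)
# After ~200 steps the estimate has converged close to the true 5 degrees.
```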

  7. Facilitating smart irrigation

    Wastage of water is a point of concern for practically every crop-grower. The efficiency of traditional irrigation systems is hardly ever more than 40-50% – implying that an alarming amount of water is wasted during every crop cycle. Farming drones do their bit to improve the standards of irrigation in fields. Users, with the help of these drones’ thermal and multispectral sensors, can pinpoint the areas of the fields that have to be watered (heat signatures are collected, along with information on the energy given off by crops). As mentioned earlier, crop vegetation indices are also calculated by the drones – to keep farmers informed about the general health of crops.

  8. Distance of flight

    The range of flight of a drone varies with its size and built-in features/purposes. Fixed-wing agricultural drones generally have greater coverage capabilities than the multi-rotor models – with the former requiring less than an hour (~50 minutes) to cover 12 square kilometers. The average flight times of fixed-wing drones are higher as well. Spot-checking entire farmlands manually is time-consuming and often inaccurate (particularly for large fields where simple perimeter checking is insufficient). Drones have enough in-built flight capacity to perform micro-surveillance of all types of agricultural fields – quickly and far more accurately.
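
The ~50-minute figure can be sanity-checked with simple arithmetic. The 80 km/h cruise speed, 250 m imaging swath and 30% overlap below are illustrative assumptions, not the specs of any particular model:

```python
def coverage_time_minutes(area_km2, speed_kmh, swath_m, overlap=0.3):
    """Rough flight-time estimate for a back-and-forth survey pattern."""
    effective_swath_km = (swath_m / 1000.0) * (1 - overlap)  # usable strip width
    track_length_km = area_km2 / effective_swath_km          # total distance flown
    return track_length_km / speed_kmh * 60

# A fixed-wing drone at 80 km/h, 250 m swath, 30% image overlap, 12 sq km field:
t = coverage_time_minutes(area_km2=12, speed_kmh=80, swath_m=250, overlap=0.3)
# t comes out to roughly 51 minutes -- consistent with the figure quoted above.
```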

  9. How do agricultural drones work?

    We have already talked about the key components of farming drones. Let us now quickly get an idea of how these UAVs function. The flight path of a drone is created by the user on a ground control device (a laptop or a smartphone). The line of flight – indicating the total area that has to be scouted/surveyed by the drone – is drawn on a map (e.g., Google Maps), and the information is uploaded wirelessly from the ground control tool to the UAV. The drone then follows this flight path, and the user has the option to perform manual overrides in case a sudden emergency crops up (for instance, an aircraft appearing in the drone’s path). The takeoff and landing of AI-based farming drones are, of course, autonomous and can be monitored remotely.
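
Such a survey path is typically a ‘lawnmower’ (boustrophedon) sweep over the drawn area. A toy waypoint generator might look like the following – the rectangular field, the 100 m pass spacing and the 80 m altitude are arbitrary examples, not values from any real ground-control software:

```python
def lawnmower_waypoints(width_m, height_m, spacing_m, altitude_m=80.0):
    """Generate an alternating back-and-forth waypoint list (x, y, altitude)
    covering a rectangular field, as a ground-control app might."""
    waypoints = []
    x, leg = 0.0, 0
    while x <= width_m:
        # Even-numbered legs fly bottom-to-top, odd ones top-to-bottom.
        ys = (0.0, height_m) if leg % 2 == 0 else (height_m, 0.0)
        waypoints.append((x, ys[0], altitude_m))
        waypoints.append((x, ys[1], altitude_m))
        x += spacing_m
        leg += 1
    return waypoints

# A 400 m x 300 m field with 100 m between passes: 5 legs, 10 waypoints.
wps = lawnmower_waypoints(width_m=400, height_m=300, spacing_m=100)
```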

  10. Uses of drones in agriculture

    The primary objective of using drone technology in agriculture is straightforward: increasing overall output levels and enhancing crop quality standards, while keeping input requirements in check and optimizing all available resources. Apart from crop planting and smart irrigation, drones have already started to be used for tasks like crop monitoring/scouting (through high-quality time-series animations), crop health assessment (with NIR sensors as well as visible light for tracking plant health and detecting diseases), in-depth soil analysis (with the help of 3D maps) and crop spraying (distributing liquid chemicals evenly over the farmlands, after real-time ground scanning and distance calculations). Drainage systems can be monitored with drones as well, while tracking the health and grazing habits of livestock is also a possibility. Farming drones also help users draw up detailed prescription maps for variable-rate crop prescriptions. Yield loss risks are minimized as well.

  11. Main challenges for farming drones

    For all their merits, agricultural drones are still new – and some uncertainties remain over their utility and efficiency levels. Farmers need to keep themselves updated with the latest changes in the drone regulations of their respective countries. Correctly deploying drones on a farm also represents a challenge, while the overall costs of integration have to be managed too. Another serious point of concern is the distinctly ordinary battery performance of most farming drones (which puts their coverage abilities under a cloud). What’s more – unless an agricultural drone actually offers end-to-end problem detection, information transfer, detailed analytics and prescriptive action suggestions (as opposed to simple aerial photography only), neither investors nor end-users will find it worth checking out. Drones in agriculture have evolved greatly, but there is still a long way to go.

  12. Hardware and software

    Between 2017 and 2024, shipments of farming drone hardware and software are projected to grow at a CAGR of more than 13%. The value of the drone hardware market is expected to reach $200 million (up from ~$60 million in 2016), while that of drone software will be more than $50 million. One of the main factors behind the relatively faster rate of growth of the hardware segment is the higher cost of the device components. While multi-rotor and fixed-wing drones are both popular, shipments of hybrid drones (primarily used for covering large agricultural fields) are also increasing at a steady rate.

  13. Different views offered by drones

    The type of information generated by drones can be customized to suit the exact requirements of the modern-day ‘smart farmer’. Broadly speaking, three different ‘views’ can be obtained from farming drones. The first view takes care of repeated monitoring requirements of a crop or a particular section of the field, on a daily, weekly or monthly basis. The second (and perhaps the most common) view deals with regular crop monitoring from above – for tracking crop health, identifying problems (soil dryness, plant diseases, fungal attacks, etc.), and suggesting satisfactory remedial actions. The other view of farming drones is all about distinguishing between healthy and sick plants with the help of multispectral images (which combine data from visible and infrared spectrums). All the drone views and services are available on-demand (unlike most satellite imaging techniques), near-real-time – and high on quality.

  14. Drones are user-friendly

    While they bring technological innovation to farming techniques in a big way, agricultural drones are typically very easy to manage. These UAVs can be seamlessly integrated into the crop-monitoring routine of farmlands, the operations and controls are simple (and becoming even simpler as drone technology develops further), and deployments can be done promptly, on an ‘as-and-when-required’ basis. The sheer range of services that farming drones can deliver makes them highly valuable for users – and in most cases, there are no reliability or safety-related concerns either. The upfront investment is not exorbitant, which adds to the convenience of farmers. Agricultural UAVs, when optimally deployed, can also lead to hefty savings – justifying their already reasonable cost levels.

Note: The cost of drones with many highly advanced capabilities can be as high as $3500/ €3000. The immediacy of the services of agricultural drones is a big advantage.

  15. USA at the forefront

    The United States has a healthy lead in the worldwide market for farming drones. Last year, one-third of the total revenue generated from drones in agriculture came from the US alone. Countries in the Asia-Pacific (APAC) region are also reporting rapidly increasing adoption of farming drones in particular, and precision/smart agriculture in general. In the European markets too, agricultural drones are increasingly finding favour. Chinese company DJI – whose first farming drone raised a staggering $75 million – is the undisputed leader, with nearly 37% share of the American drone market. Trimble Navigation, AeroVironment, GoPro and DroneDeploy are some of the other biggies involved in making agricultural drones. The space is getting more and more competitive.

Thanks to the enhanced water-resistance of farming drones, they can be used in practically all types of weather conditions (there is an outside chance of heavy rains distorting drone images though). Their value lies in the ability to add a dedicated ‘what is happening right now’ layer to on-field monitoring – ensuring that farmers are always kept in the loop. Agricultural drones typically save time and money of users, and learning to use them is not much of a challenge – provided that adequate training is available. Smart agriculture is becoming more data-driven than ever before…and drones can indeed play a mighty important role in taking farming standards to the next level.



Artificial Intelligence 3.0: 13 Things To Know About Deep Learning

Hussain Fakhruddin
Follow me

Hussain Fakhruddin

Hussain Fakhruddin is the founder/CEO of Teknowledge mobile apps company. He heads a large team of app developers, and has overseen the creation of nearly 600 applications. Apart from app development, his interests include reading, traveling and online blogging.
Hussain Fakhruddin
Follow me

Latest posts by Hussain Fakhruddin (see all)


Deep learning: Features and capabilities


Amazon, Google, Netflix, Facebook, MIT researchers – the lineup of ‘Deep Learning’ (DL) users is expanding every quarter. The yearly market revenue from deep learning software for enterprise applications is expected to go beyond the $10.5 billion mark by 2024 – up from the sub-$110 million figure in 2015. As per a recent study, the total annual income from all types of deep learning tools (software + hardware) is set to touch $100 billion by the end of 2024. The buzz around deep learning is enormous – and the technology has well and truly emerged from the realm of science fiction, shedding its image as just the tool that happens to be very good at the board game Go.

Before we get into analyzing the main points of interest about ‘deep learning’, it would be prudent to clarify the concept. Although the terms ‘artificial intelligence’, ‘machine learning’ and ‘deep learning’ are often used synonymously, the three are far from being one and the same. In essence, ‘deep learning’ can be referred to as Artificial Intelligence 3.0 (third-gen artificial intelligence, if you will) – a subset of ‘machine learning’, which itself is a subset of the broad concept of AI. While AI is all about creating programs that help machines display ‘human-like intelligence’, ‘machine learning’ refines things by extracting features from a starting object (picture, text, audio or other forms of media) and then forming a descriptive/predictive model. ‘Deep learning’ adds another layer of efficiency and sophistication, by doing away with the manual feature-extraction step and enabling the analysis of objects with the help of customized deep learning algorithms. To sum up:

Artificial Intelligence → Machine Learning → Deep Learning

Let us now turn our attentions to some interesting facts to get a better understanding of the ‘deep learning’ technology:

  1. Not a new concept

    Although the breakthroughs in ‘deep learning’ are often viewed as part of a recent phenomenon, the actual concept is not exactly a new one. In 1965, Ivakhnenko and Lapa created the first ‘deep learning’ algorithms (networks of perceptron-like units with several layers, trained with supervised learning). The computer identification setup in which the idea was used was called ‘Alpha’. Of course, the technology has evolved greatly since those days – and is currently right at the heart of the internet of things (IoT).

Note: The origin of the term ‘artificial intelligence’ can also be traced back to more than 60 years ago. It was coined at the 1956 Dartmouth Conferences by a team of computer scientists.

  2. Works like the human brain

    The way in which ‘deep learning’ models are trained has a lot in common with the working mechanism of our brains. Hence, the underlying computing model in DL applications can be explained through artificial neural networks. Data is seamlessly transmitted within these networks by automated neurons. The neural network of a ‘deep learning’ model ‘learns’ things by gaining ‘experience’ from sample data and observations – and the behaviour of the neurons undergoes changes as the model gets more ‘experienced’. The principal purpose of the neural network in particular, and the DL model in general, is the accurate approximation of unknown functions (e.g., differentiating between a cat and a rabbit) from labeled data. As DL gets more and more advanced, correctly identifying the differences between relatively similar objects is becoming possible.
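
A toy illustration of a single artificial neuron gaining ‘experience’: its weights (its ‘behaviour’) shift with every pass over the labeled examples. Here the logical AND function stands in for a real labeled dataset such as cat-vs-rabbit images:

```python
import numpy as np

# Labeled sample data: the neuron must learn the logical AND rule.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # the neuron's 'behaviour' -- changes with experience
b = 0.0
lr = 0.1

for _ in range(2000):
    z = X @ w + b
    pred = 1.0 / (1.0 + np.exp(-z))   # sigmoid activation
    grad = pred - y                    # gradient of the cross-entropy loss
    w -= lr * (X.T @ grad)             # weights updated from each 'experience'
    b -= lr * grad.sum()

outputs = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
# After training, the neuron reproduces the labeled rule: [0, 0, 0, 1].
```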

  3. Input requirements

    The value of deep learning lies in its accuracy. In select use cases, the performance of DL software can exceed human capabilities. However, there are two primary conditions for the optimal performance of deep learning applications. First, high-power graphics processing units (GPUs), which have a large number of built-in cores, are required for working with DL software – which, in turn, brings the importance of parallel training on GPUs to the fore. Secondly, for the results generated by deep learning to be of any real value, the actual volume of ‘labeled data’ or ‘sample data’ has to be huge. It’s similar to basic statistics – the greater the ‘sample’ size, the better will be the ability of a model to predict the ‘population’, or the real world.

Note: For a complicated and potentially risky task like autonomous (driverless) driving, hundreds of thousands of pictures and videos have to be fed to the concerned DL algorithm.
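
The sample-size intuition can be made concrete with the standard error of a sample mean – the textbook measure of how the uncertainty of an estimate shrinks as the sample grows (the sigma and sample sizes below are arbitrary illustrative numbers):

```python
import math

def standard_error(sigma, n):
    """Uncertainty of a sample-mean estimate: shrinks as 1/sqrt(n)."""
    return sigma / math.sqrt(n)

# With the same spread (sigma = 3), a 10,000x larger sample
# pins the estimate down 100x tighter:
small_sample = standard_error(3.0, 100)        # 0.3
huge_sample = standard_error(3.0, 1_000_000)   # 0.003
```

The same 1/sqrt(n) logic is why DL models fed hundreds of thousands of labeled examples generalize so much better than those trained on a handful.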

  4. The importance of structured data

    In the previous point, the value of large sample data for deep learning models was highlighted. It has to be kept in mind that the data we are talking about here refers only to ‘structured data’ – and not just any random pieces of sound, text or photos. When a business plans to implement deep learning in its IT system, it has to first ensure the availability of huge collections of such structured, organized data. The deep learning technology processes this data to arrive at judgements (which can vary from voice identification, to reading traffic signals, to anticipating the opponent’s next move in a board game).

  5. Availability of deep learning frameworks

    There are plenty of misplaced notions and myths about deep learning. For starters, it is often considered that DL is a tool for academicians and researchers only, and only people with advanced degrees and decades of experience can sink their teeth into it. The actual scenario is almost the reverse – with deep learning having multifarious practical uses, and there are plenty of infrastructures, networks and frameworks that can be utilized for DL training and implementation. What’s more – many of these frameworks can be used by academicians as well as developers without any problems whatsoever. The easy availability of extensive documentation on DL frameworks eases the learning curve for developers further. Many of the existing frameworks are free-to-use as well.

Note: Theano, TensorFlow and Caffe are some of the central frameworks that are used for deep learning models.

  6. Way of working

    The overall function of deep learning models can be explained in two broad steps. In the first, a suitable algorithm is created – after thorough analysis of, and ‘learning’ from, the available data (remember, feature extraction is automated here, unlike in machine learning). The major characteristics and traits of the object under scrutiny can be described by this algorithm. The algorithm is then used in Step 2, for identification of, and predictions from, objects on a real-time basis. The nature and quality of the training set/sample dataset determines the quality of the algorithm generated – and that, in turn, affects the accuracy/efficiency of the output.
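
The two steps can be sketched in NumPy with a tiny one-hidden-layer network – a deliberately simplified stand-in for a production DL model; the data and the rule being learned are synthetic:

```python
import numpy as np

# Step 1: 'learn' the algorithm (the network's weights) from labeled sample data.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # the hidden rule to be recovered

W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(scale=0.5, size=8);      b2 = 0.0           # output layer

def forward(X):
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return p, h

lr = 0.5
for _ in range(2000):
    p, h = forward(X)
    g = (p - y) / len(X)                    # averaged cross-entropy gradient
    gh = np.outer(g, W2) * (1.0 - h**2)     # backprop through the tanh layer
    W2 -= lr * (h.T @ g); b2 -= lr * g.sum()
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(axis=0)

# Step 2: apply the learned model to unseen inputs, on a real-time basis.
probs, _ = forward(np.array([[2.0, 2.0], [-2.0, -2.0]]))
# The first point clearly satisfies the rule, the second clearly does not.
```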

  7. Expenses involved

    As the scope of artificial intelligence expands, the revenue-earning capacities of ‘insights-driven businesses’ (those that rely heavily on IoT and AI-based processes) are going through the roof. On a year-on-year basis, the total investment in AI tools and processes this year will be a staggering 300% more than the corresponding figure in 2016 (according to a Forrester report). Investments in deep learning software and devices are also rising rapidly. DL, as things currently stand, is a fairly pricey technology – mainly due to the requirement of expensive GPUs in its architecture. The GPU graphics cards alone can cost several thousand US dollars. Add to this the fact that separate servers will be required for most of these cards – and it can be clearly understood that implementing a ‘deep learning’ architecture involves rather steep expenses indeed. Over the next few years though, as the technology becomes more commonplace and component prices fall, things should become a lot more competitive.

  8. The importance of transfer learning

    If a separate deep learning model (training sets, algorithms, hardware, et al.) had to be created for every different use case, that would be a problem – both operationally and financially. Thankfully, ‘transfer learning’ considerably eases the pain in this regard. To put it simply, ‘transfer learning’ (TL) refers to the practice of extending the scope of a specialized DL model to another (preferably related) use case. Another way of explaining this would be using the capability of one DL model to provide the requisite training to another one. A classic example of TL would be extending a model that can identify voices to the related task of detecting whether a speaker is male or female. Apart from reducing the number of deep learning models required, TL also helps in bringing down the total volume of sample data needed (for similar yet different purposes).
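
A minimal sketch of the transfer-learning idea, assuming a ‘pretrained’ hidden layer that is frozen and reused while only the output layer is retrained on the new task (all data here is synthetic – the point is the structure, not the task):

```python
import numpy as np

rng = np.random.default_rng(7)

# Pretend this hidden layer was trained earlier on a related task;
# in transfer learning it is 'frozen' and reused as a feature extractor.
W_pretrained = rng.normal(size=(2, 16))

def features(X):
    return np.tanh(X @ W_pretrained)   # the transferred, untouched part

# New, related task: only a small labeled dataset is needed,
# and only the lightweight output layer is trained.
X_new = rng.normal(size=(60, 2))
y_new = (X_new[:, 0] > 0).astype(float)

H = features(X_new)
w, b, lr = np.zeros(16), 0.0, 0.5
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(H @ w + b)))
    g = (p - y_new) / len(y_new)
    w -= lr * (H.T @ g)
    b -= lr * g.sum()

pred = (1.0 / (1.0 + np.exp(-(H @ w + b))) > 0.5).astype(float)
accuracy = float((pred == y_new).mean())
```

Only 17 parameters were trained here instead of the full network – which is exactly why TL cuts down both the compute and the sample data required.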

  9. Number of layers

    The underlying neural networks (‘deep neural networks’, or DNNs) of deep learning setups typically have multiple layers. The simplest of them has 3 layers (input, output, and a hidden layer – where the processing of data takes place) – while more complex networks can have as many as 140-150 layers. The ‘deep’ or ‘hidden’ layers in the neural network of a DL model remove the need for manual feature extraction from objects. Although we are still quite some way from it, experts from the field of software and app development feel that deep learning has the potential to completely replace all types of manual feature engineering in the tech space in future.

Note: Convolutional Neural Networks (CNNs) are a good example of the deep neural networks that are used for deep learning models.

  10. DL for business applications

    Product recommendations, image tagging, brand identification (one’s own as well as competitors’) and news aggregation are all examples of business tasks that can be powered by deep learning modules. The biggest example of DL implementation in businesses, however, has to be AI-powered chatbots. These bots, with the help of cutting-edge deep learning capabilities, can easily simulate human conversations – ensuring 24×7 customer service (without companies having to hire additional manpower). AI bots are particularly useful for ecommerce portals; they can store, analyze and identify patterns in received information, and can even facilitate secure payments. By the end of this decade, 80% of all firms will be in favour of chatbot integration in their work processes – a clear indication of the rapid ongoing developments in the deep learning technologies that will support the bots.

  11. Key layers of the systems

    Like any high-end software system, a DL model has multiple important layers. The rules and algorithms created after ‘studying’ the sample structured data are known as ‘training algorithms’; all possible functions can be analyzed by these algorithms in the ‘hypothesis space’; and the target function is identified from a set of data points named the ‘training data’. The component of the DL module that actually performs the necessary action (e.g., identification) is known as the ‘performance element’.

  12. Scalability is an advantage

    The volume of ‘learning data’ required, and whether manual feature extraction is needed, are important points in any deep learning vs machine learning (ML) analysis (ML requires much smaller datasets than DL). Yet another advantage of deep learning over ML is the enhanced scalability of the former. The underlying algorithms in a DL model can easily scale up or down, depending on the volume and type of data under consideration – with the performance remaining unaffected at all times. In contrast, the performance and effectiveness of machine learning (also referred to as ‘shallow learning’) tends to flatten out beyond a certain level.

  13. Deep learning use cases

    From identifying sounds and fully automated voice recognition, to applications that boost the powers of computer vision and bioinformatics – the fields in which deep learning can be implemented are seemingly endless. In the domain of electronic tools and services, DL has already proven to be a handy tool for automatic audio translation and listening. Detection of cancer cells has been facilitated by deep learning too (it has been implemented in a high-definition data microscope at UCLA). Smart driving and industrial safety & automation are two other fields where DL has enormous scope to grow. The technology will also make defense and aerospace applications safer than ever before.

There is no scope for doubting the fact that deep learning is the most powerful component/subset of artificial intelligence. Provided that there are no problems in acquiring enough structured data samples, and adequate investments can be made in GPUs – organizations can easily opt to integrate DL into their systems (in the absence of these resources, machine learning would be the better alternative). However, for all its advanced, futuristic capabilities – DL models still require the support of human beings, to resolve any possible ambiguities. Deep learning takes computing power to the next level…without quite being able to act as a perfect substitute for the human brain. Yet.