Cyber Law Issues in Smart Cities

When ITA 2000 was drafted, concepts such as “Smart Cities”, “driverless cars”, “Artificial Intelligence” or humanoid robots were hardly within the vision of the law makers. The main objective was to facilitate E-Commerce. In 2008, this was extended to provide some additional security against cyber crimes.

In 2008, the focus was on “Intermediary Liability”, but the vision was still restricted to liability arising out of crimes occurring on E-Commerce platforms and to the extent to which the owner of the platform should be held liable for offences committed by third-party users.

In the context of Smart Cities, the infrastructure depends heavily on “Automated Sensors” which collect data and pass it on to a central processor; the central processor is programmed to take automated decisions based on the data input and send operational instructions back to decision-enforcement mechanisms. There is therefore a debate on whether ITA 2008 can address the new challenges thrown up by the Smart City ecosystem.
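
To make this dependence concrete, the following is a minimal sketch of such a loop, assuming a hypothetical traffic-management use case; the thresholds, names and data structures are illustrative assumptions, not any city’s actual design.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    sensor_id: str       # which roadside sensor reported
    vehicle_count: int   # vehicles observed in the interval

def central_processor(readings: list) -> dict:
    """Automated decision-making: lengthen the green phase at congested
    junctions. No human reviews these decisions before they are enforced."""
    decisions = {}
    for r in readings:
        decisions[r.sensor_id] = "EXTEND_GREEN" if r.vehicle_count > 50 else "NORMAL_CYCLE"
    return decisions

def enforce(decisions: dict) -> None:
    """Decision-enforcement mechanism: push instructions back to the signals."""
    for junction, action in decisions.items():
        print(f"Signal at {junction}: {action}")

enforce(central_processor([SensorReading("JN-04", 72), SensorReading("JN-09", 18)]))
```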

In this process, we have legal queries on whether we are violating “Privacy” while our sensors collect information, and whether mistakes committed by our “Central Processors”, armed with Big Data analytics capabilities and Artificial Intelligence, are punishable as cyber crimes.

The recent Uber autonomous car accident in Arizona has highlighted the consequences of failures of the sensors or the processing systems. Big Data analysis, which takes raw data from some source and adds intelligence to it to make it more useful information for third parties, has raised issues of “Ethics”, as we see in the Cambridge Analytica case.

It is interesting to note that, without any inkling of such possibilities, ITA 2000 provided that “An action by an automated system is attributable to the person who caused it to behave automatically”. By this one provision, all actions of automated systems have been brought under legal scrutiny just as if a human was sitting there and operating the system, though he might have used an algorithm as a tool. Such a person could be the owner of the system, like Uber in the Arizona case.
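
One way such attribution can be operationalised is to log every automated action against the legal person who deployed the system. The sketch below is illustrative only; the field names and the example owner are assumptions.

```python
import datetime

def log_automated_action(action: str, system_id: str, owner: str) -> dict:
    """Record an automated action against the person who 'caused the system
    to behave automatically', so accountability can be traced later."""
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system": system_id,
        "action": action,
        "attributed_to": owner,  # the legally accountable party, e.g. the operator
    }

print(log_automated_action("EMERGENCY_BRAKE_NOT_APPLIED", "av-unit-7", "Operator Pvt Ltd"))
```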

It is open to Uber to hold the software developer or the sensor manufacturer liable for their part in the failure of warranty, depending on the contractual obligations. Under Section 79 of ITA 2000/8, read with Section 85, criminal punishment can also be imposed on the intermediaries and their executives for adverse actions by the automated systems.

Therefore, if automated systems in a Smart City cause any accident, Indian law has somebody to be held accountable.

As regards Big Data analytics, the current practice is to depend on the “Consent” obtained by the “Data Collector” who collects the personal data.

If the data collector adds value to the information, then the right over the value addition is claimed by the person who added the value, and this is recognized under IPR. The value-added information is different from the raw data handed over by the data subject. Hence the contract of data collection has to specify whether the data subject permits the creation of value over the raw data provided by him, and whether he is entitled to any benefits thereof; otherwise he may not be able to object to the value creation.

Naavi has recommended earlier that personal data should be treated as property and could be made transferable for a consideration, with a royalty payable to the data subject if the value is encashed by the data collector. However, a proper mechanism does not exist for this purpose, and hence the value adder is free to make a profit on the basis of the raw data supplied by the data subject.

However, when the value-addition processing of personal data leads to the creation of any “Profile Data” which is used in such a manner as to defame the data subject, it may be considered punishable whether or not there was consent, and whether the data was collected from the data subject or from a third party.

The “permission to transfer” and the “Conditionalities of such transfer” inherent in the consent determine whether the Data analytics becomes a “privacy issue” or not.
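As a sketch of how this could work in practice, a processor might gate any analytics on the transfer permission and the conditions recorded in the consent. The field names below are assumptions for illustration, not an established schema.

```python
def transfer_permitted(consent: dict, purpose: str, recipient: str) -> bool:
    """Allow data analytics only if the consent permits transfer AND the
    conditionalities (purpose, recipient) attached to it are satisfied."""
    if not consent.get("transfer_allowed", False):
        return False
    return (purpose in consent.get("permitted_purposes", [])
            and recipient not in consent.get("blocked_recipients", []))

consent = {"transfer_allowed": True,
           "permitted_purposes": ["research"],
           "blocked_recipients": ["ad-network"]}
print(transfer_permitted(consent, "research", "university-lab"))  # True: no privacy issue
print(transfer_permitted(consent, "marketing", "ad-network"))     # False: privacy issue
```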

The damage caused to the data subject by an aggregator or processor of data is not much different from the damage that may be caused by a malicious person who hacks into CCTVs or other devices of another owner and uses them for unauthorized surveillance or DDoS attacks. With Smart Cities using CCTV and other monitoring devices in plenty, there is fertile ground for misuse by hackers if the security is weak. The legal implication of such damage (e.g., the Dyn attack) is determined under Section 43A of ITA 2008, which imposes “Reasonable Security Practices” on the owner of a device.

The data aggregators or value processors are, however, in the nature of “Intermediaries”, and their liabilities will be determined by the application of “Due Diligence” principles.

One due diligence aspect that can be considered when personal data is transferred to another person is to transfer the data along with the consent, so that the downstream data processor is aware of the consent restrictions. This is not yet an established practice, but it can be considered, as sketched below.
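
In implementation terms, this could mean never shipping personal data alone: the record and its consent travel as one bundle, so the downstream processor inherits the restrictions. The structures below are an assumed sketch.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ConsentTerms:
    purposes: tuple      # uses the data subject agreed to
    transferable: bool   # whether onward transfer is permitted

@dataclass(frozen=True)
class DataBundle:
    payload: dict          # the personal data itself
    consent: ConsentTerms  # restrictions that travel with the data

def transfer(bundle: DataBundle) -> DataBundle:
    """Pass data downstream only together with its consent terms."""
    if not bundle.consent.transferable:
        raise PermissionError("Consent does not permit onward transfer")
    return bundle  # the downstream processor receives data AND restrictions

bundle = DataBundle({"name": "A. Subject", "city": "Pune"},
                    ConsentTerms(purposes=("research",), transferable=True))
print(transfer(bundle).consent.purposes)  # downstream sees the restrictions
```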

Hence a “Self-imposed Ethical Standard”, as due diligence, is the only available means through which the downstream user of data can be expected to protect the privacy of a data subject with whom he has no direct contractual contact.

Also, when data is transferred from one data collector to another data processor, if the data is pseudonymized, then the obligations of both the data collector and the downstream processor would be either absent or substantially reduced. This can happen in many instances of research, but not when the processing is intended to be used for marketing. “Marketing”, however, is almost always a category of use that is prohibited in any consent, and hence its prohibition can be considered a “Presumption” unless the contrary is proved by an “Explicit Consent”.
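
A minimal pseudonymization sketch follows, assuming the collector holds a secret key and knows which fields are direct identifiers: identifiers are replaced with a keyed hash before transfer, so the downstream processor sees no identifiable individual.

```python
import hashlib
import hmac

SECRET_KEY = b"held-only-by-the-data-collector"  # illustrative; manage securely in practice

def pseudonymize(record: dict, identifier_fields=("name", "phone")) -> dict:
    """Replace direct identifiers with a keyed hash (HMAC-SHA256) so the
    downstream processor cannot identify the data subject."""
    out = dict(record)
    for field in identifier_fields:
        if field in out:
            digest = hmac.new(SECRET_KEY, str(out[field]).encode(), hashlib.sha256)
            out[field] = digest.hexdigest()[:16]
    return out

print(pseudonymize({"name": "A. Subject", "phone": "9900000000", "footfall": 3}))
```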

When “Artificial Intelligence” is used in a Smart City scenario, the sensors (including CCTVs equipped with face recognition or gait recognition) are “machines” which collect the personal data. The “Privacy Breach” is therefore not evident unless the data is disclosed to a human being. As long as the data is being processed within the system, it is difficult to say whether “Privacy has been breached”, though it could be a step towards an eventual breach of privacy.

Again, this is a grey area for law, and we need to consider that just as we say “Privacy” is a right available only to “identifiable, living individuals”, we can define that a “Breach of Privacy” is recognized only when a “living individual” accesses “identifiable personal data” without the consent of the data subject.

With such a definition, Smart City processing can be largely relieved of the privacy obligations, as any data which is collected can be filtered into “Suspect person’s personal data” and “Non-suspect person’s personal data”, with the non-suspect person’s personal data being de-identified by the machine itself.

Only the “Suspect person’s personal data” may be escalated to human intervention, and as long as the machine (or the person who owns its actions) can justify a “Reasonable Doubt” as to why the data subject should be considered a “Suspect”, a privacy breach may not be considered to have occurred.
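
In code, this proposal could look like the sketch below: the machine partitions captured records, de-identifies non-suspects before any human sees them, and escalates only suspect records with the justification recorded for later “Reasonable Doubt” scrutiny. The watchlist matching rule and field names are assumptions.

```python
def triage(records: list, watchlist: set) -> tuple:
    """Partition sensor captures: suspects are escalated to human review
    with a recorded justification; all others are de-identified in-machine."""
    suspects, cleared = [], []
    for rec in records:
        if rec.get("face_id") in watchlist:
            rec["justification"] = "face_id matched a watchlist entry"
            suspects.append(rec)                # escalated to human intervention
        else:
            anonymous = dict(rec)
            anonymous.pop("face_id", None)      # de-identified before any disclosure
            cleared.append(anonymous)
    return suspects, cleared

captures = [{"face_id": "F-123", "location": "JN-04"},
            {"face_id": "F-999", "location": "JN-04"}]
suspects, cleared = triage(captures, watchlist={"F-123"})
print(len(suspects), len(cleared))  # 1 1
```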

Presently, these thoughts are being presented as an extension of the present laws. If this is universally accepted, then we may not need a separate Cyber Law for Smart Cities. If not, we may consider some amendments to ITA 2008 to add the clarifications necessary to expand some of its provisions as may be required.

Naavi
