An Analysis of the Regulation of Children's Online Activities Under the Digital Personal Data Protection Bill, 2022

The DPDP Bill was released by the Ministry of Electronics and Information Technology on November 18, 2022, for public comments. The purpose of the Bill is to provide for the processing of digital personal data in a manner that recognizes both the right of individuals to protect their personal data and the need to process personal data for lawful purposes. While this objective appears to address the need of the hour, the DPDP Bill imposes certain additional obligations with respect to children.


The internet has become an indispensable part of modern life. The significance it bears and the impact it has on young minds cannot be overstated. It provides them with access to a vast array of information and resources, including educational content, news, and entertainment. It also allows them to connect with others and form communities, whether through social media, gaming, or online forums. The use of the internet in day-to-day life has grown considerably over the past two decades. The purpose of this article is not to restate the importance of the internet but to reflect on the intriguing debate over parental regulation of children's internet use under the proposed Digital Personal Data Protection (“DPDP”) Bill, 2022.

The Gordian Knot

Section 10[1] of the proposed DPDP Bill deals with the processing of the personal data of children. The section states that ‘The Data Fiduciary shall, before processing any personal data of a child, obtain verifiable parental consent in such manner as may be prescribed’. Under the Bill, a child is defined as someone who has not completed eighteen years of age[2]. Every time a child creates an account, be it a social media, gaming, or OTT account, the Data Fiduciary[3] involved, i.e., the platform providing the service, would have to secure the consent of the child’s parent or legal guardian before processing their data. The DPDP Bill also prescribes a penalty of up to Rs. 200 crore for non-compliance[4].

The implications of this proposed section are vast. Currently, most social media platforms, including Twitter, Facebook, and Instagram, require the user to be above the age of thirteen years to create an account, without any requirement of parental consent. Practically speaking, these platforms do not verify the age claimed by the user, and it is thus possible to provide an incorrect age in order to create an account. The same goes for all other prospective Data Fiduciaries. From knowledge platforms like YouTube and Quora to entertainment and gaming platforms like Spotify and Steam, all these services currently set thirteen years as the minimum age to create an account and enjoy their features. To comply with the DPDP Bill, if it is passed, the platforms would not just have to modify their own terms and conditions for the Indian jurisdiction but also come up with a verifiable parental consent mechanism. Since most platforms and websites on the internet require the creation of an account to access their features or services fully, enforcing Section 10 of the DPDP Bill would require an entire overhaul of how the internet functions. There would have to be parental consent forms and verification mechanisms in almost all corners of the internet.
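To illustrate the compliance burden, a Data Fiduciary's signup flow would need an age-and-consent gate of roughly the following shape. This is a minimal Python sketch under stated assumptions: the field and function names are hypothetical, and the Bill leaves the actual verification mechanism to rules yet to be prescribed, so a non-empty token here merely stands in for a completed verification flow.

```python
from dataclasses import dataclass
from typing import Optional

CHILD_AGE_THRESHOLD = 18  # "child" under Section 2(3) of the DPDP Bill, 2022


@dataclass
class SignupRequest:
    declared_age: int
    # Stand-in for a completed parental-verification flow; the Bill leaves
    # the mechanism to rules "as may be prescribed" (hypothetical field).
    parental_consent_token: Optional[str] = None


def may_process_personal_data(req: SignupRequest) -> bool:
    """Gate account creation on verifiable parental consent (Section 10)."""
    if req.declared_age >= CHILD_AGE_THRESHOLD:
        return True
    return req.parental_consent_token is not None
```

The point of the sketch is structural: every service that today accepts a self-declared age of thirteen would, under the Bill, need the additional verification branch for anyone under eighteen.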

While mandating such monitoring of every online activity of a child might seem appropriate in an average conservative Indian household, it is important to understand that doing so fundamentally alters the internet’s defining strength – accessibility of information. Curtailing this would have detrimental effects on a child’s development, by allowing parents to restrict the child’s exposure to perspectives that might not agree with their own. It would also be in defiance of Article 13 of the Convention on the Rights of the Child[5], which India ratified on December 11, 1992. The Article guarantees children the “right to freedom of expression; this right shall include freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers”.

Untying the Knot

Perhaps one way to mitigate the issues that could arise if the proposed section is brought into effect is to introduce gradation in the age threshold at which consent is required. In this respect, inspiration can be taken from the Indian Penal Code, 1860,[6] which classifies children by age (below 7, from 7 to 12, etc.) to determine the law applicable to them. Even the much-discussed General Data Protection Regulation, 2016 of the European Union allows member states to lower the age of the child to 13 years for determining whether parental consent is needed[7].

The rigidity of the parental consent requirement should also be tempered by a model that considers the evolution and development of children at different ages. France’s model of children’s data privacy rights under the French Data Protection Act, 1978, which was substantially amended in 2018, could also be looked at. Article 45 of the said Act[8] introduces the concept of “joint consent”. It states that ‘If the child is under 15 years of age, the processing will be lawful only if consent is given jointly by the child and the holder(s) of parental responsibility over that child.’ This, in essence, means that consent is based on a mutual agreement between the child and the parent(s) holding parental rights. Children above the age of 15 years are allowed to give their own consent.
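The graded models described above can be contrasted in a few lines. This is an illustrative sketch only: the model labels and the returned strings are the author's shorthand for the three approaches, not statutory language.

```python
def who_must_consent(age: int, model: str) -> str:
    """Who must consent to the processing of a minor's personal data,
    under three models discussed in this article (illustrative shorthand).
    """
    if model == "dpdp_2022":      # Section 10: flat threshold of 18
        return "parent or guardian" if age < 18 else "user alone"
    if model == "gdpr_floor":     # GDPR Art. 8, if a member state opts for 13
        return "parent or guardian" if age < 13 else "user alone"
    if model == "france":         # Art. 45, French DPA: joint consent below 15
        return "child and parent(s) jointly" if age < 15 else "user alone"
    raise ValueError(f"unknown model: {model!r}")
```

A sixteen-year-old, for instance, would need a parent's consent under the DPDP Bill but could consent alone under either the GDPR floor or the French model, which is precisely the gradation this article argues for.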


Thus, while it is ultimately up to the lawmakers to resolve, they must keep in mind the logistical and sociological effects of enforcing mandatory parental regulation of children’s online activities. If not by reducing the age to a more reasonable one, as done in other jurisdictions, then systems like gradation in age or joint parent-child consent should be put in place. In Faheema Shirin R.K. v. State of Kerala[9], the Kerala High Court, speaking specifically in the context of students, stated that the right to access the internet forms a part of the freedom of speech and expression guaranteed under Article 19(1)(a) of the Constitution. The Court held that ‘Enforcement of discipline shall not be by blocking the ways and means of the students to acquire knowledge’. The concept of the “best interest of the child”, well established in custody and guardianship cases, which places the best possible alternative for the child before the rights of the parents, could perhaps be interpreted broadly and acknowledged by the lawmakers in the present debate as well.


[1] Section 10, The Digital Personal Data Protection Bill, 2022.

[2] Defined under Section 2(3), The Digital Personal Data Protection Bill, 2022.

[3] Defined under Section 2(5), The Digital Personal Data Protection Bill, 2022.

[4] Section 25, The Digital Personal Data Protection Bill, 2022.

[5] Article 13, Convention on the Rights of the Child, 1989 [General Assembly resolution 44/25].

[6] Sections 82 and 83, Indian Penal Code, 1860.

[7] Article 8, General Data Protection Regulation, 2016.

[8] Article 45, French Data Protection Act, 1978.

[9] Faheema Shirin R.K. v. State of Kerala, WP(C) No. 19716 of 2019(L) (Kerala High Court, 2019).



Generative AI: Generating Legal Headaches?

The year 2022 saw major breakthroughs in the field of generative Artificial Intelligence. This field differs from the more traditional “discriminative” AI models, whose algorithms learn from the labelled datasets they are fed during “training” in order to classify inputs or make decisions. By contrast, “generative” AI models learn the underlying patterns of their training data and use them to produce new, synthetic data, typically through “unsupervised” or self-supervised learning. The output of generative AI includes digital images and videos, audio, text and even programming code. Poetry, stories, blog posts and artwork have all now been created by AI tools.

Generative AI: The Socio-Economic and Legal Problems

Like every technology, generative AI too has pros and cons. While it has made it easy to create various kinds of content at scale and in much shorter timeframes, the same technology has also been used to create “deep fakes” that then go viral on social media.  

OpenAI’s image generator platform “DALL-E 2” and its automatic text generator GPT-3 have already been used to create artwork and other text-based content. GPT-4, expected to be far more powerful and advanced, is slated for release in 2023. Until recently, OpenAI did not allow commercial usage of images created using the platform, but it has now begun to grant “full usage rights”, which include the rights to sell the images, reprint them, and use them on merchandise.

Generative AI has the potential to open a Pandora’s box of litigation. A class action suit has already been filed against OpenAI, Microsoft and GitHub alleging copyright violations by Copilot, GitHub’s AI-based code generator that uses OpenAI’s Codex model. The argument behind the suit is this: the tool draws on hundreds of millions of lines of open-source code written, debugged, or improved by tens of thousands of programmers from around the world. While these individuals support the open-source concept, code generators like Copilot use their code (which was fed to the model during training) to generate code that may well be used for commercial purposes. The original authors of the code remain unrecognized and receive no compensation.

A similar situation can easily occur with artwork created using AI-based tools, because all such tools need to create a digital image is a text prompt. For example, Polish artist Greg Rutkowski, known for creating fantasy landscapes, has complained that simply typing a prompt like “Wizard with sword and a glowing orb of magic fire fights a fierce dragon Greg Rutkowski” will create an image that looks quite close to his original work. The smarter text recognition and generative AI get, the easier such tools will become for even lay people to use. Karla Ortiz, a San Francisco-based illustrator, is concerned about the potential loss of income that she and her fellow professionals might suffer due to generative AI.[1]

Sooner rather than later, this challenge will be faced by playwrights, novelists, poets, photographers and pretty much all creative professionals. Indeed, AI tools could conceivably put writers out of business in the next few years. AI generators are “trained” using millions of poems, images, paintings, etc. that were created by persons dead or alive. Their creators or legal heirs do not currently have the option to exclude these works from the training datasets. In fact, they usually do not even know that their works have been included.

The creative industry itself is taking various steps to protect the rights of various categories of creative professionals. Such measures include the use of digital watermarking for authentication, banning the use of AI-generated images, and building tools that allow artists to check if their works have been used as part of any training datasets and then opt out if they so choose.  
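The last of these measures, tools that let artists check whether their works appear in a training dataset, amounts to an index lookup. The sketch below shows the idea in its simplest form; it is an assumption for illustration only, and real services use perceptual hashes that survive resizing and re-encoding, whereas SHA-256 here matches only byte-identical copies.

```python
import hashlib


def fingerprint(work_bytes: bytes) -> str:
    """Exact-match fingerprint of a creative work. A perceptual hash
    would be used in practice; SHA-256 is a simplified stand-in."""
    return hashlib.sha256(work_bytes).hexdigest()


def used_in_training(work_bytes: bytes, dataset_index: set) -> bool:
    """Check a work against a pre-built index of training-set fingerprints."""
    return fingerprint(work_bytes) in dataset_index
```

An artist (or a service acting on their behalf) would fingerprint their catalogue once, then test it against a published index of the dataset; a hit is the signal to invoke whatever opt-out process the dataset offers.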

A more pernicious problem could arise when misleading content is created, deliberately or inadvertently, posted online, and then consumed by innocent users. Some early examples of such misuse have already emerged, and there is a genuine concern that if these activities are not nipped in the bud, and information on the internet is not somehow authenticated, serious, unexpected, and large-scale damage may be caused.

Overhauling the Laws

In the US, AI tools may, for now, take legal cover under the fair use doctrine, though that defence is considerably weaker for commercial usage. Arguably, the current situation, in which researchers and companies building AI tools freely use massive datasets to “train” their tools, violates the spirit of ownership and protection of IPR, because these AI generators are also being used for commercial benefit. With various lawsuits already underway, changes to IPR and related laws will need to be made to explicitly address AI. Not doing so will only impede the use of AI in fields where such algorithms can deliver significant benefits by speeding up innovation.





Securing your Data with the Trade Marks Registry

Data privacy has been a cause of concern for individuals and corporates alike; however, when sharing personal information with government authorities, we tend to overlook this concern. Have you ever wondered how secure your confidential, proprietary, or personal information is when you share it with a government agency like the Trade Marks Registry?

Indian Intellectual Property Offices come under the Ministry of Commerce and Industry; therefore, they are under the control of the Central Government. The Trade Marks Registry, established in 1940, primarily acts as a facilitator in matters relating to the registration of trademarks in India.

The Trade Marks Registry (TMR) is a public filing system. That means once a trademark application is filed with the TMR, a lot of information is placed on record, including the applicant’s and its representative’s personal data, such as mailing address, and the proof of use of the trademark. The digitization of the Registry in 2017 prompted the current practice of recording information on a public access system.


Fundamental Concerns

Mailing Address: Open and easy access to such personal information exposes an applicant to scams and other unwanted solicitations. For instance, scam emails from third parties, made to appear as though sent by the TMR seeking maintenance fees, attempt to deceive applicants into paying additional fees. Many will recall how applicants who filed international applications between 2005 and 2015 were targeted by international scammers who had obtained their information from WIPO records; through oversight, many were duped into paying huge sums of money.

If an attorney represents an applicant, the TMR does not send correspondence about the trademark application directly to the applicant; the Registry communicates directly with the authorised attorneys instead. Hence, if applicants receive any mail relating to their trademark, they should consult their attorneys, who can evaluate it to ensure that a scam letter is not mistaken for genuine correspondence.

Documents to support the use of the mark: Applicants are frequently required to submit documentary evidence to support their applications and the commercial use of their marks. Such evidence is often public, but an applicant might disclose information not intended to be made public, such as bills, financial papers, reports, and other confidential material. Once such information is uploaded or disclosed, there is no mechanism to have it masked or deleted from the TMR’s database.


Initiatives by the Trade Mark Registry

In recent times, the TMR has adopted the practice of restricting public access to evidentiary documents submitted during opposition/rectification proceedings that the competing parties upload on the TMR. However, similar documents filed during any other stage, such as filing and pre-opposition prosecution, are still exposed to public access, even if they are documents or information relating to commercial confidence, trade secrets, and/or any other form of confidential, proprietary, or personal information.

The advantage of such an open and publicly available database is that it serves as a countrywide “notice”, which means that an alleged infringer of your trademark cannot claim ignorance of your brand. However, disclosure of such information exposes applicants to email scams and other unwanted solicitations and can also harm their competitive position in the market.

In September 2019, on account of various representations made by numerous stakeholders regarding the TMR’s display of confidential, proprietary, and personal information,[1] a public notice was issued by the Registry, inviting stakeholders’ comments on the aforesaid concerns.

The TMR proposed the classification of such documents into two categories:

  • Category I: Documents that are fully accessible and available for viewing or downloading by the public.
  • Category II: Documents for which details will be available in the document description column, but viewing and downloading will be restricted.


Roadblocks and Viable Course of Action

Notably, the Right to Information (RTI) Act, 2005, obligates public authorities to make information on their respective platforms available to the public in a convenient and easily accessible manner. There are some notable exceptions to this rule, i.e., information related to commercial confidence and trade secrets is exempted from being disclosed or made accessible to the public in so far as their disclosure leads to a competitive handicap for the disclosing party. Personal information is also exempted to the extent that its disclosure leads to an invasion of privacy or if it has no relation to public activity or interest.

Hence, it is crucial to understand that while such a classification, as suggested by the TMR above, might seem like a good initiative on the surface, the lack of any concrete boundaries for the terms “confidential” or “personal” information leaves the Registry with unchecked discretion to generalise datasets and restrict access to documents on the TMR website. A simple example is the data collected by the TMR through pre-designated forms, including Form TM-A, Form TM-O, etc. Most of these forms mandate the submission of certain personal information, including the proprietor’s name, address, telephone number, etc. However, this cannot simply mean that the TMR denies the general public access to such trademark application forms, as this would defeat the primary goal of advertising such marks on the Registry, which is to invite opposition or evidence against them. Thus, while the objective behind such a classification of documents might be well-intended, restricting access to certain documents might create a conflict of interest for the TMR, and it might end up over-complicating due-diligence processes, leading to increased costs and resources.

Such generalised classifications are, hence, only viable in theory. The TMR might end up entertaining hundreds of RTI applications if it decides to limit access to certain documents, which might be necessary for proper due diligence and prosecution. The free and open availability of documents enables the public to have smoother and easier access to essential records and credentials of the trademark proprietors, thereby allowing the masses to have a better understanding of the prosecution history of important trademarks of the target company.

In the long run, a more sustainable alternative for the TMR might be to introduce a multi-factor authentication system for parties interested in carrying out due diligence or prosecution against a mark. A multi-factor authentication requirement for gaining access to the records and documents on the Registry might lengthen the process in the short run. Nonetheless, the move could be a game changer in the long run, because it would allow the Registry to restrict access to the confidential and personal data of its users to parties with an original or vested interest in the registration of a mark.

Such an approach would not only enable the Registry to provide open and efficient access to necessary documents to the parties who have an original or vested interest in the registration of a mark, but it would simultaneously vest it with the flexibility to protect the sensitive, confidential, as well as personal data of its users from scammers or non-interested parties.
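A second factor of the kind proposed above could be as simple as a registration check plus a one-time code delivered out of band. The following is a minimal sketch using Python's standard library; the function names and the two-factor policy are illustrative assumptions, not anything the TMR has specified.

```python
import hmac
import secrets


def issue_otp() -> str:
    """Generate a 6-digit one-time code, to be delivered out of band
    (e.g. to the email or phone of a registered, verified party)."""
    return f"{secrets.randbelow(10**6):06d}"


def verify_otp(submitted: str, issued: str) -> bool:
    """Compare codes in constant time to resist timing attacks."""
    return hmac.compare_digest(submitted, issued)


def may_view_document(requester_is_registered: bool, otp_ok: bool) -> bool:
    """The second factor only matters once the first factor, registration
    as an interested party, has been satisfied."""
    return requester_is_registered and otp_ok
```

The design point is the conjunction in the last function: neither registration alone nor a valid code alone opens the record, which is what lets the Registry serve interested parties while shutting out casual scrapers.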



A Privacy-by-Design approach is the future of the modern web. Until the Registry implements more elaborate internal safeguards on its website and databases to protect the privacy and integrity of the public data contained therein, applicants are advised to work with an experienced trademark attorney, who can help reduce the exposure of their information to individuals with ulterior motives and mitigate the harm associated with misuse of their data.


[1] Public Notice dated 06/09/2019 re Categorization of Documents on the TMR. Accessible at:



Non-Personal Data Governance Framework, 2020

The realm of the internet has become an information powerhouse and data has become the new endowment of resources that governments and corporate entities are eager to tap into. The transformation in the digital environment and the emergence of information-intensive services has made data a necessary raw material for most undertakings.

Reports from 2019 suggest that every minute, Instagram users post 277,000 stories, Google handles 4.4 million searches, and Uber facilitates over 9,700 rides. Today, data is an asset to various businesses and holds importance in investments, mergers and acquisitions, and/or direct monetization.


While the discussion on ‘personal data’ has revolved around privacy and security concerns, non-personal data is being eyed as an economic opportunity to augment public or private interest, one that must not be squandered. Considering the value proposition attributed to non-personal data, its legal aspects were sought to be dealt with separately from ‘personal data’, which would be governed by the Personal Data Protection Bill, 2019, itself on the brink of finalization.


Consequently, an Expert Committee (“Committee“) was constituted by the Ministry of Electronics and Information Technology (“MeitY“) to study various issues relating to non-personal data. The Committee submitted its Report on Non-personal Data Governance Framework for comments from stakeholders in July 2020.


The report highlighted that data regulation is essential to realize the maximum potential in data, namely its economic, social, and public value. The need to regulate data stems from imbalances in bargaining power between companies, which lead to the creation of data monopolies. Moreover, the privacy concerns surrounding the dilution of shared data must be tackled.


Non-Personal Data (“NPD“) is data that cannot be identified with a particular individual, for example, weather forecasts, traffic details, geospatial information, production processes, anonymized personal data, etc.


  1. Committee’s Proposal for Non-Personal Data Regulation


The NPD Governance Framework outlines norms for collection of data and data sharing by entities. The salient features of the proposed framework are:


  • The NPD framework provides key roles for all participants, such as Data Principal, Data Custodian, Data Trustees and Data Trusts.
  • Classification of NPD: Non-personal data is further classified into Public NPD, Community NPD and Private NPD. Public NPD is NPD collected or generated by the government or its agencies, and includes data collected or generated in the course of execution of all publicly funded works (e.g. public health information, vehicle registration, etc.), excluding data explicitly declared as confidential under the law. Community NPD is data about any inanimate or animate phenomenon concerning a particular community of natural persons (e.g. data collected by e-commerce platforms or by telecom operators). Private NPD is NPD collected or produced by non-governmental entities or persons.
  • Ownership of non-personal data: Where non-personal data is derived from the personal data of an individual, the data principal for the personal data will be the data principal for the NPD too. Further, the rights over Community NPD collected in India will vest in the trustee of that community.
  • Sensitivity of NPD: The Committee has also defined a new concept of ‘sensitivity of NPD’, as NPD can be sensitive from the perspective of: (a) national security or strategic interests; (b) sensitive or confidential information relating to businesses; and (c) anonymized data that bears a risk of re-identification.
  • Data Businesses and data disclosures: The framework also creates a new horizontal classification called ‘Data Business’, which applies when any existing business collects data beyond a threshold level. Such Data Businesses have to register themselves and furnish information on what they collect, their purpose, and the nature of data stored. Registration of Data Businesses collecting data below the threshold is not mandatory.
  • Non-Personal Data Regulatory Authority: The NPD Regulatory Authority shall ensure that data is shared for sovereign, social and economic welfare, and for regulatory and competition purposes, and that all stakeholders adhere to the rules and data sharing requirements.
  2. Unanswered Questions: Shortcomings of the Proposed Framework


Attempting to govern the NPD is a commendable effort, however, it seems that there is a slew of questions that are left unanswered. The following are the issues relating to the proposed framework:


  • The foremost reason to govern NPD, as highlighted by the Committee, is the imbalance in the digital ecosystem. However, the sources of these imbalances have neither been identified nor analysed, nor has it been clarified how the proposed regulations would resolve these inequities.
  • Ambiguous classification of NPD: The various types of NPD potentially overlap, and clearly demarcating a line between the three types would be a difficult task. Moreover, there is no clarification as to how a ‘community’ would be determined for Community NPD. The definition of ‘community’ is wide; under it, even religious groups, residents of the same locality, or people with the same educational background would be valid communities, and these may have conflicting interests over data shared with the government. Further, without any guiding principles, companies will be forced to make legally binding decisions on what they deem to be a valid community, the scope of data to be shared, and the resolution of competing claims, which is problematic at various levels. Finally, various interests could attach to a particular dataset, and in such cases, who would be entrusted with the data remains ambiguous.
  • Anonymization of Personal Data to Non-Personal Data: The process of converting personal data into non-personal data by removing certain identifiers or credentials is termed ‘anonymization’. Anonymization would undoubtedly convert a set of personal data into non-personal data, but such data still runs the risk of re-identification. Further, although anonymization is essential, excessive anonymization could render the data over-generalized and useless.
  • Reactions of stakeholders to the sharing of data: Mandatory data sharing has been heavily criticized by stakeholders, as it undermines the investments put into businesses and the value of their intellectual property, to the benefit of competitors. Such ‘forced data sharing’ is counterproductive and would have a negative effect on foreign trade and investment. NPD can constitute trade secrets that may be protected by IP laws; sharing this data raises concerns around the right to carry on business and India’s obligations under international trade law. The purposes for data sharing under the framework, namely ‘sovereign’, ‘core public interest’, and ‘economic’ purposes, essentially cover all the data held by companies and must be narrowed down.
  • Lack of clarity on who really are the trustees of data: There is ambiguity regarding who can be a data trustee. Whether private, for-profit organizations or entities within the government could be data trustees is not apparent. The position regarding a data trustee’s independence and conflicts of interest also remains murky. It is essential that the roles and functions of these bodies be comprehensively defined.
  • User consent: The NPD Framework also proposes that the user’s consent must be taken before data is anonymized. It remains particularly unclear how such consent would be obtained. Further, a company would need to invest resources in obtaining user consent, while sharing the data may provide no incentive in return, potentially pushing such companies into losses.
  • Over-regulation by the Non-Personal Data Authority: Creating an altogether new authority for NPD could lead to regulatory overlap, given that the Data Protection Authority addresses and enforces privacy concerns and the Competition Commission of India oversees consumer welfare.
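The re-identification concern noted above, that stripping direct identifiers does not by itself anonymize a dataset, can be shown with a toy k-anonymity check. The records, field names, and the use of plain deletion as the "anonymization" step are all illustrative assumptions for the example.

```python
from collections import Counter


def anonymize(records, direct_identifiers):
    """Drop direct identifiers, the minimal notion of anonymization
    described in the framework (removing certain identifiers)."""
    return [{k: v for k, v in r.items() if k not in direct_identifiers}
            for r in records]


def min_k_anonymity(records, quasi_identifiers):
    """Size of the smallest group sharing the same quasi-identifier
    values. k = 1 means at least one record is unique on those fields
    and therefore at high risk of re-identification."""
    groups = Counter(tuple(r.get(q) for q in quasi_identifiers)
                     for r in records)
    return min(groups.values())


records = [
    {"name": "A", "pin_code": "560001", "age_band": "20-30"},
    {"name": "B", "pin_code": "560001", "age_band": "20-30"},
    {"name": "C", "pin_code": "110001", "age_band": "40-50"},
]
anonymized = anonymize(records, {"name"})
# The third record is still unique on (pin_code, age_band), so k = 1:
# dropping names alone did not remove the re-identification risk.
```

This is also why excessive anonymization is the opposite failure mode: coarsening pin_code and age_band until every record looks alike raises k but drains the dataset of analytical value.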
  3. Conclusion

This effort of the Ministry, in setting up a Committee to study NPD, which may subsequently lead to legislation governing NPD in India, is praiseworthy; however, a number of issues need reconsideration. Stakeholders have expressed concern over mandatory data sharing and disclosures, as the framework overlooks the enormous investments made by companies. Further, the roles and functions of various entities under the framework are not clearly defined. The NPDA established under the framework may have functional overlaps with the CCI and the Data Protection Authority.


Moreover, there is ambiguity regarding Community NPD and user consent. There is no doubt that the ever-evolving nature of information technology makes regulation demanding; the road ahead is therefore arduous. Hopefully, the concerns raised will be adequately addressed by the Committee and constructively resolved in favour of all the stakeholders.

Photo by Franki Chamaki on Unsplash



Core Legal Issues with Artificial Intelligence in India

The adoption and penetration of Artificial Intelligence in our lives today needs no further enunciation or illustration. While many still consider the technology to be in its infancy, its presence is so profound that we do not comprehend our reliance on it unless it is specifically pointed out. From Siri and Alexa to Amazon and Netflix, there is hardly any sector that has remained untouched by Artificial Intelligence.

Thus, the adoption of artificial intelligence is not the challenge; its ‘regulation’ is the slippery slope. This leads us to questions such as: do we need to regulate artificial intelligence at all? If yes, do we need a separate regulatory framework, or are the existing laws enough to regulate the technology?

Artificial intelligence goes beyond ordinary computer programs and technological functions by incorporating the intrinsically human ability to apply knowledge and skills, and to learn and improve over time. This makes such systems human-like. Since humans have rights and obligations, should these human-like entities not have them too?

As of now, however, no regulation or judicial pronouncement acknowledges the legal status of artificial intelligence. Defining the legal status of AI systems would be the first cogent step in framing laws to govern artificial intelligence, and might even aid the application of existing laws.

A pertinent step towards a structured framework was taken by the Ministry of Commerce and Industry when it set up an 18-member Task Force in 2017 to identify and address the concerns and challenges in the adoption of artificial intelligence and to facilitate the growth of the technology in India. The Task Force released its report in March 2018[1], recommending the steps to be taken in formulating a policy.

The Report identified ten sectors with the greatest potential to benefit from the adoption of artificial intelligence and to cater to the development of artificial intelligence-based technologies. It also highlighted the major challenges that large-scale implementation of artificial intelligence might face, namely: (i) encouraging data collection, archiving and availability with adequate safeguards, possibly via data marketplaces/exchanges; (ii) ensuring data security, protection, privacy and ethical use via regulatory and technological frameworks; (iii) digitisation of systems and processes with IoT systems while providing adequate protection from cyber-attacks; and (iv) deployment of autonomous products and mitigation of their impact on employment and safety.[2]

The Task Force also suggested setting up an “Inter-Ministerial National Artificial Intelligence Mission” for a period of five years, with funding of around INR 1,200 crore, to act as a nodal agency coordinating all AI-related activities in India.


Core Legal Issues

When we look at the adoption of artificial intelligence from a legal and regulatory point of view, the main question is whether the existing laws are sufficient to address the legal issues that might arise, or whether we need a new set of laws to regulate artificial intelligence technologies. While certain aspects, such as intellectual property rights and the use of data to develop artificial intelligence, might be covered under existing laws, some legal issues may require a new set of regulations to oversee artificial intelligence technology.


  • Liability of Artificial Intelligence


The current legal regime does not have a framework where a robot or an artificial intelligence program might be held liable or accountable in case a third party suffers any damage due to any act or omission by the program. For instance, let us consider a situation where a self-driven car controlled via an artificial intelligence program gets into an accident. How will the liability be apportioned in such a scenario?

The more complex the artificial intelligence program, the harder it becomes to apply simple rules of liability to it. The question of apportionment will also arise when the cause of harm cannot be traced back to any human element, or where the damage caused by an act or omission of the artificial intelligence technology could have been avoided by human intervention.

One more instance where the current legal regime may fall short is where an artificial intelligence system enters into a contractual obligation after negotiating the terms and conditions of the contract, and there is subsequently a breach of that contract.

In United States v Athlone Indus Inc[3], the court held that since robots and artificial intelligence programs are not natural or legal persons, they cannot be held liable even where devastating damage is caused. This traditional rule may need reconsideration with the adoption of highly intelligent technology.

The pertinent legal question is what rules, regulations and laws will govern these situations, and who is to decide them, given that artificial intelligence entities are not considered subjects of law.[4]


  • Personhood of Artificial Intelligence Entities


From a legal point of view, the personhood of an entity is an extremely important factor in assigning rights and obligations. Personhood can be either natural or legal. Its attribution matters because it helps identify who would ultimately bear the consequences of an act or omission.

For artificial intelligence entities to have any rights or obligations, they should be assigned personhood so as to avoid legal loopholes. “Electronic personhood”[5] could be attributed to such entities in situations where they interact independently with third parties and take autonomous decisions.


  • Protection of Privacy and Data

For the development of better artificial intelligence technologies, the free flow of data is crucial, as data is the main fuel on which these technologies run. Artificial intelligence technologies must therefore be developed so as to comply with the existing laws on privacy, confidentiality and anonymity, and with the other data protection frameworks in place. There must be regulations ensuring that there is no misuse of personal data and no security breach, along with mechanisms that enable users to stop the processing of their personal data and to invoke the right to be forgotten.

It further remains to be seen whether the current data protection and security obligations should be imposed on AI and similar automated decision-making entities to preserve the individual's right to privacy, which was declared a fundamental right by the Hon'ble Supreme Court in KS Puttaswamy & Anr. v Union of India and Ors[6]. This also calls for an all-inclusive data privacy regime that applies to both the private and public sectors and governs the protection of data, including data used in developing artificial intelligence. Similarly, surveillance laws would need revisiting for circumstances involving the use of fingerprints or facial recognition through artificial intelligence and machine learning technologies.

At this point, there are many loose ends to tie up, such as the rights and responsibilities of the person who controls the data used to develop artificial intelligence, and the rights of the data subjects whose data is being used to develop such technologies. The double-edged-sword situation between the development of artificial intelligence and access to data for further purposes also needs to be deliberated upon.

Concluding Remarks

In this evolving world of technology with the capabilities of autonomous decision making, it is inevitable that the implementation of such technology will have legal implications. There is a need for a legal definition of artificial intelligence entities in judicial terms to ensure regulatory transparency. While addressing the legal issues, it is important that there is a balance between the protection of rights of individuals and the need to ensure consistent technological growth. Proper regulations would also ensure that broad ethical standards are adhered to. The established legal principles would not only help in the development of the sector but will also ensure that there are proper safeguards in place.