Transcribing Court Proceedings with AI Technology: An Analysis

The Supreme Court of India has recently decided to use AI-powered natural language processing technologies to transcribe court proceedings. The idea is to capture what is spoken inside the courtroom and convert it from speech to text.

This intelligent automation will speed up the creation of transcripts that are later made available to various stakeholders, such as lawyers and the parties to the case. Quicker access to transcripts will benefit lawyers, especially during multi-day hearings. The AI solution, named Technology Enabled Resolution (“TERES”) and developed by Bangalore-based Nomology Technology Private Limited, was already being used to transcribe arbitration matters. It has proved its value there because specialist transcribers often had to be hired from abroad, adding significantly to the overall cost borne by the parties.
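TERES itself is proprietary and its inner workings are not public, but the underlying speech-to-text step can be illustrated with a minimal sketch using the open-source Whisper model (purely an assumed stand-in; the audio file name below is hypothetical):

```python
# Minimal speech-to-text sketch using the open-source "whisper" package.
# Illustration of the general technique only -- this is NOT the TERES system,
# and "hearing_audio.wav" is a hypothetical recording of courtroom audio.
import whisper

model = whisper.load_model("base")               # small, general-purpose model
result = model.transcribe("hearing_audio.wav")   # returns text plus timed segments

# Print a rough, timestamped draft transcript that a human editor could then correct.
for segment in result["segments"]:
    print(f"[{segment['start']:7.1f}s - {segment['end']:7.1f}s] {segment['text'].strip()}")
```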

With AI already permeating much of human society, it was only a matter of time before the judiciary adopted it in the sphere of litigation. Even so, the decision to move ahead with the experiment is laudable, especially in light of the apex court’s recent decision allowing live-streaming of certain hearings (i.e., only those that relate to the interpretation of the Constitution and are heard by benches of five or more judges).

These two steps will force all stakeholders in our legal ecosystem to change their mindsets, ways of working, and in-court behaviour. Over time, one hopes that such changes will collectively yield various benefits, some of which are listed below:

  • Improved justice delivery system (based on better arguments and more efficient access to and assessment of evidence).
  • Reduced pendency of cases (based on faster disposal of matters as well as a decline in the tendency to approach courts for frivolous matters).
  • Easier and cost-effective access to legal recourse for larger sections of our society.
  • Minimised use of time-wasting tactics (e.g., needless adjournments).
  • Higher standards of courtcraft and a better understanding of the context in which certain comments are made by the bench or the bar.
  • Better recordkeeping.
  • More accountability.
  • Enhanced learning for newer generations of lawyers.

For years, the government has sought to make India a preferred centre for international arbitration and mediation. Progress on the ground has, however, been slow. The wider use of modern technologies may prove to be a catalyst in this regard. It can also boost the ease of doing business in India, especially at a time when, for various geopolitical and economic reasons, foreign direct investment is on the rise. At the same time, the burgeoning start-up ecosystem in India is attracting significant private equity and venture capital. A lot of intellectual property is being created in India and needs to be suitably protected, especially because much of it relates to emerging areas that are critical to the future of our country and, indeed, the world.

To be sure, there will still be various practical challenges that need to be ironed out. As pointed out by the Hon’ble Chief Justice of India, Dhananjaya Y Chandrachud, multiple voices at the same time may well confuse the AI tool and hinder accurate transcription. Different accents and loudness of voices may also potentially complicate matters. Also, unless the entire judiciary (across all courts) adopts such technologies, the benefits will be limited. Such widespread adoption may still be derailed by objections from various quarters.

Therefore, it is too early to conclude with any level of certainty that the above-mentioned benefits (and possibly, others) will indeed be realized, and if yes, how long it will take. However, the Supreme Court’s decision to use technologies to usher in greater levels of efficiency and transparency is a clear signal of intent. As the old saying so presciently reminds us, even a journey of a thousand miles begins with the first step. That step has been taken.

Image Credits:

Photo by Tara Winstead: https://www.pexels.com/photo/clear-mannequin-on-dark-blue-background-8386365/


The Curious Case of the Robolawyer (No, it's not a Perry Mason Novel!)

With the advent of newer technologies, there has been a drastic increase in the use of AI (Artificial Intelligence), which has significantly altered the way technology is perceived and will have a far-reaching impact in the future. Hence, it becomes necessary to minimize its shortcomings and make prudent use of the technology.

I do not know how many of you have heard of Joshua Browder, the 26-year-old founder of DoNotPay, a US-based venture that has developed a “robolawyer”- essentially an AI-powered bot that helps users in use cases such as appealing vehicle parking tickets, negotiating airline ticket refunds, and contesting service provider bills. Although the app was first released in 2015, to be honest, until recently, I too had not heard of him or the app!

My curiosity was piqued when I recently read that his company is willing to pay a million US dollars to any person or lawyer willing to repeat verbatim, before a US Supreme Court judge, everything that its robolawyer tells them to say. It remains to be seen whether someone will take Browder up on that offer, whether the US Supreme Court will grant permission, and what the outcome will be. However, it is being reported in the media that the DoNotPay app will help two defendants argue speeding tickets in US courts next month. The company has promised to pay the fines on behalf of the users if the robolawyer loses their appeals.

The app runs on the AI model known as “Generative Pre-trained Transformer” or GPT. This is the same technology that runs ChatGPT, which reportedly hit a million users in less than a week of its launch. AI technologies are constantly improving, and there is now greater emphasis on “ethics” and “explainability.” Essentially, the software must be able to explain how it arrived at a certain conclusion or output. This is important to minimize, if not altogether eliminate, the risk of biases and prejudices that creep into AI software simply because it is trained using hundreds of millions of content elements on the web (articles, images, reports, videos, etc.) that were all created by humans, and as such, carry the individual beliefs, prejudices, convictions, etc. of their original creators.
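As a rough illustration of what a GPT-style model does at its core, here is a minimal sketch using the openly available GPT-2 model via the Hugging Face transformers library (an assumed stand-in for illustration; ChatGPT and DoNotPay run far larger, proprietary models, and the prompt is hypothetical):

```python
# Minimal sketch of GPT-style text generation using the open GPT-2 model.
# Illustration of the underlying technique only; it is not the model behind
# ChatGPT or DoNotPay's robolawyer.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The main arguments for contesting a parking ticket are"
outputs = generator(prompt, max_new_tokens=60, num_return_sequences=1)

print(outputs[0]["generated_text"])  # prompt followed by the model's generated continuation
```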

Over the coming decades, AI will shake up many fields, including legal practice, healthcare and finance. Not all fields will be impacted at the same pace or to the same extent, but change they will. Already, AI is being used by healthcare professionals to improve the efficacy of diagnosis and to confirm lines of treatment. Law firms too are beginning to use AI to ease the tedium of trawling through case laws and judgments to identify precedents and the reasoning of the benches involved, as sketched below. Soon, lawyers will simply be able to type questions into ChatGPT, which will provide well-reasoned answers in a matter of minutes. Of course, the real skill will be to ask the right questions, figure out how sensible the answers are, and decide on further courses of action. Think of it as an advocate briefing a senior lawyer before the latter argues in court.
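The retrieval side of that work can be sketched very simply: rank documents by their similarity to a query. The toy example below uses TF-IDF over a few hypothetical case summaries; it is a simplified stand-in, not how any particular legal-research product actually works:

```python
# Toy sketch of "finding relevant precedents": rank hypothetical case summaries
# against a query using TF-IDF similarity. Real legal-research tools use far
# richer models, but the basic retrieval idea is similar.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

cases = {
    "Case A": "Damages awarded for breach of a software development contract.",
    "Case B": "Patent application rejected because the inventor was not a natural person.",
    "Case C": "Copyright infringement found in unauthorised reproduction of artwork.",
}
query = "Can an artificial intelligence system be named as an inventor on a patent?"

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(list(cases.values()) + [query])

query_vec = matrix[len(cases)]      # last row is the query
case_vecs = matrix[: len(cases)]    # earlier rows are the case summaries
scores = cosine_similarity(query_vec, case_vecs).ravel()

for name, score in sorted(zip(cases, scores), key=lambda pair: -pair[1]):
    print(f"{name}: similarity {score:.2f}")
```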

Half-baked knowledge is dangerous. For many years, patients (and/or caregivers) have used search engines to find information about symptoms, diagnostic tests, and lines of treatment and then argue with qualified medical professionals about their choices, at times forcing doctors to explain their hypotheses and reasoning. It is quite likely that in the foreseeable future, clients of lawyers and law firms too will be tempted to adopt a similar approach, which means lawyers too will end up spending time and effort on educating clients on matters of law and jurisprudence. Maybe it is worth coming up with new pricing models to dissuade frivolous “brainstorming” and “legal strategy” sessions!

Note to myself: Try out ChatGPT to explore the kind of responses it provides and start preparing for a future that will undoubtedly be more closely linked with AI tools.


Image Credits:

Photo by cottonbro studio: https://www.pexels.com/photo/person-using-macbook-3584994/


Generative AI: Generating Legal Headaches?

The year 2022 saw major breakthroughs in the field of generative Artificial Intelligence. This field is different from the more traditional “discriminative” AI models, whose algorithms rely on the datasets they are fed during “training” to make decisions such as classifications or predictions. By contrast, “generative” AI models learn the underlying patterns of their training data and draw on them to produce new content. In other words, generative AI uses largely “unsupervised” learning algorithms to create synthetic data. The output of generative AI includes digital images and videos, audio, text and even programming code. In recent days, even poetry, stories, blog posts and artwork have been created by AI tools.
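The contrast can be made concrete with a minimal, hypothetical sketch (toy data and scikit-learn models chosen purely for illustration): a discriminative model learns to decide between classes, while a generative model learns the shape of the data itself and can sample new, synthetic points from it.

```python
# Toy contrast between a discriminative and a generative model (illustration only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])  # two clusters
y = np.array([0] * 100 + [1] * 100)

clf = LogisticRegression().fit(X, y)      # discriminative: learns a decision boundary
print(clf.predict([[1.5, 1.5]]))          # -> a class label for a given input

gen = GaussianMixture(n_components=2, random_state=0).fit(X)  # generative: models the data itself
synthetic_points, _ = gen.sample(5)       # -> brand-new "synthetic" data points
print(synthetic_points)
```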

Generative AI: The Socio-Economic and Legal Problems

Like every technology, generative AI too has pros and cons. While it has made it easy to create various kinds of content at scale and in much shorter timeframes, the same technology has also been used to create “deep fakes” that then go viral on social media.  

OpenAI’s image generator platform “DALL-E 2” and its automatic text generator GPT-3 have already been used to create artwork and text-based content respectively. GPT-4, expected to be far more powerful and advanced, is slated for release in 2023. Until recently, OpenAI did not allow commercial usage of images created using the platform, but it has now begun to grant “full usage rights”, which include the rights to sell the images, reprint them and use them on merchandise.

Generative AI has the potential to open a Pandora’s box of litigation. A class action suit has already been filed against OpenAI, Microsoft and GitHub alleging copyright violations by Copilot, GitHub’s AI-based code generator that uses OpenAI’s Codex model. The argument behind the suit is this: the tool draws on hundreds of millions of lines of open-source code written, debugged or improved by tens of thousands of programmers from around the world. While these individuals support the open-source concept, code generators like Copilot use their code (which was fed to the model during training) to generate code that may well be used for commercial purposes. The original authors of the code remain unrecognised and receive no compensation.

A similar situation can easily occur with artwork created using AI-based tools, because all that such tools need to create a digital image is a text prompt. For example, Polish artist Greg Rutkowski, known for creating fantasy landscapes, has complained that simply typing a prompt like “Wizard with sword and a glowing orb of magic fire fights a fierce dragon Greg Rutkowski” will create an image that looks quite close to his original work. The smarter text recognition and generative AI get, the simpler such tools will be for even lay people to use. Karla Ortiz, a San Francisco-based illustrator, is concerned about the potential loss of income that she and her fellow professionals might suffer due to generative AI.[1]
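To illustrate how little is needed, here is a minimal text-to-image sketch using an openly available Stable Diffusion checkpoint via the diffusers library (an assumed stand-in; DALL-E 2 and other commercial generators are proprietary, and the prompt below is hypothetical):

```python
# Minimal text-to-image sketch using an open Stable Diffusion checkpoint.
# Illustration only: commercial generators such as DALL-E 2 are proprietary,
# and the checkpoint name below is simply a commonly used public model.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # assumes a GPU; use .to("cpu") otherwise (much slower)

prompt = "a wizard with a sword and a glowing orb of magic fire fighting a fierce dragon"
image = pipe(prompt).images[0]        # one text prompt in, one synthetic image out
image.save("generated_wizard.png")
```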

Sooner rather than later, this challenge will be faced by playwrights, novelists, poets, photographers and pretty much all creative professionals. Indeed, AI tools could conceivably put writers out of business in the next few years! AI generators are “trained” using millions of poems, images, paintings, etc. created by persons living or dead. Their creators or their legal heirs do not currently have the option to exclude their works from the training datasets; in fact, they usually do not even know that their works have been included.

The creative industry itself is taking various steps to protect the rights of various categories of creative professionals. Such measures include the use of digital watermarking for authentication, banning the use of AI-generated images, and building tools that allow artists to check if their works have been used as part of any training datasets and then opt out if they so choose.  

A more pernicious problem could conceivably arise when, deliberately or inadvertently, misleading content is created and posted online, and then consumed by unsuspecting users. Some early examples of such misuse have already emerged, and there is a genuine concern that if these activities are not nipped in the bud and information on the internet is not somehow authenticated, serious, unexpected and large-scale damage may be caused.

Overhauling the Laws

In the US, AI tools may, for now, take legal cover under the fair use doctrine, but that defence is far harder to sustain for commercial usage. Arguably, the current situation, in which researchers and companies building AI tools freely use massive datasets to “train” their tools, violates the spirit of ownership and protection of IPR because these AI generators are also being used for commercial benefit. With various lawsuits already underway, changes to IPR and related laws will need to be made to explicitly enable AI. Not doing so will only impede the use of AI in fields where such algorithms can deliver significant benefits by speeding up innovation.

References:

[1] https://www.technologyreview.com/2022/09/16/1059598/this-artist-is-dominating-ai-generated-art-and-hes-not-happy-about-it/

Image Credits:

Photo by Tara Winstead: https://www.pexels.com/photo/robot-fingers-on-blue-background-8386369/


AI Adoption: Behooves Heightened Responsibility & Higher Ethics

In July 2022, UK-based Artificial Intelligence (AI) firm Peak commissioned a benchmarking survey to study AI adoption in the USA, UK, and India. The study, jointly conducted by the Centre for Economics and Business Research, included 3000 senior decision-makers from companies with at least 100 employees; the survey was augmented by responses from 3000 middle-level staff as well.

A key finding was the inaugural Decision Intelligence (DI) Maturity Index, an indicator of how prepared businesses in these three jurisdictions are to adopt AI for commercial decision-making. The study found that over the past six years, the percentage of companies that have adopted AI technologies stood at 28%, 20% and 25% in the US, UK and India respectively. While it was only to be expected that the US would be the leader, it was surprising that, when it comes to leveraging AI in commercial areas, Indian companies ranked highest: they scored 64 (out of 100), while those in the US and UK scored 52 and 44 respectively.

The study also found that 18% of US workers were unsure whether the companies they work for used AI at all; for India, this figure stood at 2%. It was also revealing that Indian enterprises embedded data science capabilities within commercial teams, while their western counterparts relied more on central data teams.[1] Of course, it must be acknowledged that China is perhaps much further ahead in terms of deploying AI, although we will likely not get to know the details anytime soon.

 

AI will play a major role in how our world evolves

 

Consumers like you and me already experience the power of AI in the form of reminders from fitness apps, or recommendations on what books to read, shows to watch or music to listen to. Intelligent parking assistance in some cars is another example of AI in action. AI is also at work when we see “deep fake” videos that look and sound so real. AI is not a new field; it has in fact been around since the mid-1950s, when the term was coined. But it is only in recent years that AI has become less esoteric and more mainstream.

This shift is due to rapid advances in computing power and speed, as well as the evolution of models and capabilities in natural language processing, voice recognition, machine vision and other allied areas. It is this pace and nature of AI evolution that gives experts the confidence that AI will play a key role in economic and social development, the delivery of education and healthcare services, forecasting and managing natural disasters, national security and much more.

Several national flagship infrastructure backbones in India, including the GST and Income Tax systems, Open Network for Digital Commerce (ONDC), Government e-Marketplace (GeM), the Unified Logistics Interface Platform (ULIP) and the Gati Shakti National Master Plan already have elements of AI embedded in them. India’s private sector too, has been actively working on AI-based projects and products that span different use cases and industry sectors.

 

India is taking steps to prevent unbridled use of AI – but “there are miles to go before we sleep”

 

A couple of decades ago, movie franchises such as “The Matrix” and “The Terminator” conjured up visions of machines taking over the world. Today, we are closer to a stage where inadvertent or deliberate misuse of AI can unleash unknowable harm on society. It can be argued that human avarice has already damaged our planet beyond redemption, but we have done that without much help from AI!

There have already been instances reported in the media where the use of AI in some applications has thrown up evidence of discrimination and bias – negative traits that are patently human. The companies behind these applications have rolled them back, but such incidents signal a clear and present danger. There has also been much debate in recent times about whether AI-based programs are truly “sentient”, i.e., capable of feelings. Maybe we are still some years away from truly sentient machines, or maybe they are already here. Either way, it is important to ensure that AI is governed by appropriate ethics to make it “responsible.”

Clearly, AI has great power; it must therefore also be used with great responsibility. “Responsible AI” has many dimensions, including reliability, safety, privacy, transparency, fairness, and accountability. Just as important is for humans to know how an AI system arrived at a certain conclusion or decision. While most of the above have to do with how AI powered devices and applications are designed and built, it is also critical to ensure that ethics govern how these devices and apps are deployed and what they are used for. 

In the absence of such mechanisms (and punitive action for violators), think of the myriad privacy incursions that could easily be caused by physical surveillance using drones or digital eavesdropping on phone conversations. Even AI-powered software meant to analyze CVs and identify the “best” candidates can be misused to ensure that only candidates of a certain profile are hired.
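One simple, widely used way to probe how a model arrived at its decisions is to measure how much each input feature actually influenced them. The sketch below is a hypothetical example using scikit-learn's permutation importance on a stand-in public dataset (chosen only because it ships with the library); the same kind of audit could, in principle, be applied to a CV-screening model:

```python
# Hypothetical explainability sketch: permutation importance asks which inputs a
# trained model actually relied on. The dataset and model are stand-ins only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the model's accuracy drops.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda pair: -pair[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")   # the features the model leaned on most
```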

AI ethics and governance needs to cover more than just individual companies that develop AI tools and applications. All stakeholders must work together to put in place an overarching framework that includes policies, laws, rules, and SOPs to ensure that AI does not become a Pandora’s Box. A key objective must be to ensure that there is mutual trust.

To support India’s burgeoning AI ecosystem, NITI Aayog has begun to hold consultative discussions. Its report “AI for All” is grounded in the fundamental rights enshrined in India’s Constitution. It suggests setting up an expert committee, comprising specialists in AI, cybersecurity, law, the social sciences and various industry domains, along with representatives of government and civil society, to create a regulatory/governance framework.

Such a framework must necessarily be flexible, to accommodate unexpected changes powered by technological innovations. NASSCOM, India’s software industry association, has launched a Responsible AI hub to ensure that key stakeholders are engaged so that broader societal views are considered and factored into strategies and plans related to not just innovations, development, and deployment but also governance.

A survey by the IBM Institute for Business Value has found that the responsibility for leading and upholding AI ethics has shifted to the CEO. Some 62% of business leaders agree that AI ethics is important to their organizations. It is a given that the world will never be a utopia. It is time that “leaders” in every field, from around the world, stand up and take the necessary steps to prevent the world from becoming an AI-powered dystopia. AI is too important a domain to be left to the whims and fancies of individual countries, companies or leaders, whether democratic, despotic, megalomaniac, idealistic or somewhere in between.


Demystifying the Inventorship Rights of an AI System in India

In this age of technological advancement, Artificial Intelligence (AI) has taken a giant leap from undertaking straightforward tasks to originating remarkable inventions. Can an AI system be considered an inventor? This question has vexed jurisprudence across the globe for a considerable time. However, through its recent decision in Thaler v. Commissioner of Patents, the Australian Federal Court has forced jurisdictions across the world to re-think the inventive capacity of AI and its role in the contemporary ecosystem of innovation.

Through this article, we have tried to determine the implications of the Thaler decision and examine the position of the Indian legislation on the inventorship rights of an AI.

Factual Matrix

Dr. Stephen Thaler designed the Device for Autonomous Bootstrapping of Unified Sentience (DABUS). DABUS is an artificial intelligence system credited with creating an optimised beverage container and a flashing light for use in emergencies. On the strength of these creations, Dr. Thaler filed patent applications worldwide, including in Australia, Canada, China, Europe, Germany, India, Israel, Japan, South Africa, the United Kingdom and the United States.

The Deputy Commissioner of Patents rejected Dr. Thaler’s Australian patent application, which named DABUS as the inventor. The matter was contested, and the Federal Court of Australia ultimately determined that an AI could be recognised as an inventor under the Australian Patents Act. According to the Court, the patent would be owned by Dr. Thaler, the developer, owner and controller of DABUS. The Court held that the legislative intent was to encourage innovation and that nothing in the Act expressly or implicitly forbids an AI from being named as an inventor.

Indian Stance: Inventorship Rights of an AI

Recently, in India, the Controller General of Patents recorded objections to recognising an AI as an inventor in patent application no. 202017019068, citing the provisions of Section 2 and Section 6 of the Patents Act, 1970 (“Act”). The term “inventor” has not been defined under the Act. However, Section 6 states that, among other things, a patent application can be filed by any person claiming to be the true and first inventor of an invention.[1]

A bare reading of these provisions indicates that a natural person is distinguished from other entities, and that anyone other than a natural person cannot claim inventorship. Consequently, a natural person who is the true and first inventor, and who contributes his originality, skill or technical knowledge to the invention, meets the criteria to be acknowledged as an inventor in India.

In the case of V.B. Mohammed Ibrahim v. Alfred Schafranek, AIR 1960 Mysore 173, it was held that a financing partner cannot be an inventor, nor can a corporation be the sole applicant that claims to be an inventor. The Court, through this decision, emphasised that only a natural person (who is neither a financing partner nor a corporation) who genuinely contributes their skill or technical knowledge towards the invention shall qualify to claim inventorship under the Act.

In light of this judgement, it might at first be perceived that an AI, too, can contribute its skill or technical knowledge to an invention and thereby become an inventor. However, a reference to Som Prakash Rekhi v. Union of India & Anr, AIR 1981 SC 212, clarifies what qualifies as a legal ‘person’ under Indian law. The Supreme Court observed that ‘personality’ is the sole attribute of a legal person; such a ‘personality’ is an entity that can sue or be sued by another entity. An AI is not capable of exercising such rights, nor can it independently perform the duties required of a juristic person. For instance, it cannot enter into an agreement or transfer or acquire rights in a patent or patent application. It would also be impossible for an AI to oppose or revoke a patent application. Hence, an AI falls short of the standards for being deemed an inventor in India.

Furthermore, the legislative intent behind the Indian Patent Act as found in the Ayyangar Committee report of 1959[2] suggests that inventors are mentioned in a patent application as a matter of right. Whether or not the actual deviser has a proprietary claim on the innovation, he has a moral right to be acknowledged as the inventor. This confers reputation and boosts the economic worth of the inventor. The inventor may give up his ownership interest in a particular patent due to a contract/agreement in law, but he retains his moral right.

An examination of legislative purpose and current public policy reveals a desire to protect the rights of the inventor, a natural person who creates IP and can exercise his moral rights. AI, on the other hand, cannot be granted moral rights, nor does it appear to enjoy the benefits intended by legislation or public policy. Given this, designating an AI as an inventor or co-inventor under current Indian law seems impossible until explicit revisions are made.

Role of AI and Economic Growth in India

The Parliamentary Standing Committee (“Committee”), constituted under the Department of Commerce, analysed the current landscape of the IPR regime in India and its contribution to promoting innovation and entrepreneurship in the country in its report titled “Report 161: Review of the Intellectual Property Rights Regime in India”, presented in the Rajya Sabha on July 23, 2021. In particular, it examined the challenges in the current legislative structure, including the inventorship rights of an AI.

The Committee acknowledged the relevance and utility of cutting-edge AI and machine learning technologies, particularly in current times, significantly affected by the pandemic, in which digital technology has proved instrumental in responding to the global crisis. Further, to understand the impact and role of AI and its relationship with intellectual property, the Committee relied on a report released by Accenture titled “How AI Boosts Industry Profits and Innovation”, which estimated that AI, if used optimally, could inject US$ 957 billion into the Indian economy by 2035.

Therefore, the Committee recommended a review of the relevant provisions of the Patents Act, 1970 [Section 3(k)] and the Copyright Act, 1957 on a priority basis to afford inventorship rights to AI in India. The Report also stated: “The Committee recommends the Department that the approach in linking the mathematical methods or algorithms to a tangible technical device or a practical application should be adopted in India for facilitating their patents as being done in the EU and U.S. Hence, the conversion of mathematical methods and algorithms to a process in this way would make it easier to protect them as patents”, thereby bringing algorithms and mathematical methods within the ambit of patent law.

The Committee concluded that amendments to the legislative framework would protect works created by an AI (either autonomously or with assistance/inputs from a human), incentivize pioneering inventions and R&D in the country, and maintain an enabling ecosystem for the protection of human-intelligence innovations. The Committee maintained that the embargo placed on the inventorship rights of an AI would dissuade significant investments in the sector, since AI-induced innovations would not be protected in the country.

Conclusion:  A Way Forward for Inventorship Rights of an AI System 

The Thaler decision would have a favourable impact on owners of AI systems. However, commentators have expressed concerns regarding the difficulties that may arise from extending patent protection to AI-generated concepts, such as the following:

  • Impact on copyright law: Such a decision may lead courts to re-examine the question of AI authorship and regard AI as a creator of AI-generated works, which would open a Pandora’s box of judicial conflicts.[3]
  • It could potentially raise the bar for innovation or fundamentally alter the definition of a ‘person skilled in the art,’ making it more difficult for human innovators to obtain patent protection.
  • Accepting inventorship to include AI systems would elevate AI to the status of a legal person, allowing it to hold and exercise property rights.
  • It raises concerns about who has the right to use or own the AI-created product. As the AI system is not a legal body, it cannot enter into agreements allowing it to transfer its inventorship rights.

The ability of an AI to be an inventor under patent law will be determined by the specific language in each jurisdiction’s patent laws. In nations where plain statutory wording requires an inventor to be a natural person, legislative changes and amendments may be needed to explicitly incorporate and recognise AI-generated ideas. In places where the statutory language is less explicit, such as Australia, the courts may have additional freedom to adopt purposive statutory interpretation and policy considerations.[4] We anticipate that all IP offices will adopt a unified approach to successfully address the emerging difficulties posed by inventions made by AI.

References: 

[1] Section 6, the Patents Act, 1970.

[2] Shri Justice N. Rajagopala Ayyangar, Report on the Revision of the Patents Law, 1959.

[3] Rita Matulionyte, Australian court says that AI can be an inventor: what does it mean for authors? Kluwer Copyright Blog (September 2021).

[4] Lam Rui Rong, Can Artificial Intelligence Be an Inventor Under Patent Law? Australian Federal Court Says ‘Yes’ but U.S. District Judge Says ‘No’, SKRINE (September 2021).

Image Credits: Photo by Gerd Altmann from Pixabay 


There is a Tide in the Affairs of Men…and Nations too

Three decades ago, the mobile revolution helped India overcome its communication challenges. Today, mobile phones have become a commodity in India. At least feature phones have, even if smartphones haven’t. But if you are old enough to remember India during the mid-1990s, you will know that India’s fixed line telephone density was very low at that time. Getting new telephone connections was tough, and involved waiting periods that often extended to several months. Due to ageing cables, making telephone calls was a challenge, and even when calls were connected, the quality was poor.  

Mobile communication technologies unleashed a powerful revolution that changed all this. Even far-off locations where laying fixed-line cables was a challenge got access to mobile towers and signals. So huge has been the transformative power of mobile technologies that an entire generation of regulatory reforms, business models and lifestyle paradigms all depend on the ubiquitous mobile phone.

Why is this relevant now?

Today, the world is on the threshold of a new breed of technologies such as AI/ML, Robotics, IIoT, Blockchain, Cloud, Analytics, Drones, Autonomous Vehicles, the Metaverse etc. Collectively and individually, these technologies have the potential to transform the world as we know it to a much greater degree. Indeed, the next decade may witness the greatest changes driven by technology in the recorded history of humankind.

This is why it is important to be cognizant of the shift and take timely action. There are no established leaders in these areas because the sectors, their impact and the underlying technologies are still evolving. India has the technical and commercial savvy to harness these new technologies and drive innovation. What is needed is an educational and industrial framework that ensures students acquire and sharpen their expertise in these new areas and start applying it to solving real-world problems. The National Education Policy is one step in this direction, but implementing it in the right way is key. Not just the curriculum, but the whole system of education must change. Internships must become more focused and integrated with the learning process, and not remain the largely certificate-driven activity they have been (and still are).

It’s not just the central government that needs to act with alacrity and vision; state governments also need to formulate the right policies and rules to ensure that the country as a whole is able to take advantage of the massive disruption that is occurring all around us. Some states have woken up to this need and are putting in place plans to encourage entrepreneurs and attract investments into key sectors. The initial agreement to set up a chip-making facility in Karnataka is one example, but it’s early days yet, and many more hurdles need to be overcome.

The startup ecosystem, too, needs to readjust its approach to backing ventures in these new areas. Yes, the risks will be higher and so may the failure rate, but these ventures must be seen as proving grounds for technologies and ideas. Our private sector must also be ready to make the necessary investments to embrace these new technologies and lead innovation and adoption. Our large IT services industry must accelerate the shift to offerings built around these new areas. A lot is already happening, but the pace must pick up. India’s public sector, long regarded as a white elephant, can also play a key role by absorbing these technologies and deploying them innovatively in sectors of national importance such as energy, agriculture, disaster recovery, infrastructure development and defence.

Achieving all this requires macroeconomic stability: inflation under control, relatively stable exchange rates and an adequate money supply. For a number of reasons that are outside the control of our government or individual companies, these conditions may not be met immediately. But as responsible citizens, business leaders, regulators, teachers and parents, each one of us has a role to play. Of course, the executive, the legislature and the judiciary also have their own roles to play.

To quote Brutus from Shakespeare’s play “Julius Caesar”,

“There is a tide in the affairs of men
Which, taken at the flood, leads on to fortune;
Omitted, all the voyage of their life
Is bound in shallows and in miseries.
On such a full sea are we now afloat,
And we must take the current when it serves,
Or lose our ventures”.

This is very much the situation that much of the world finds itself in at this time. If we in India can rise to the occasion, our continued ascendancy as a power is assured. But there is many a slip between the cup and the lip, and if we squander time and energy on needless and irrelevant issues, it is just as certain that we will not realise our potential. Let us make the right choice.

Image Credits: Photo by Pete Linforth from Pixabay 


Core Legal Issues with Artificial Intelligence in India

The adoption and penetration of Artificial Intelligence in our lives today needs no further enunciation or illustration. While the technology is still considered by many to be in its infancy, its presence is so profound that we do not comprehend our reliance on it unless it is specifically pointed out. From Siri and Alexa to Amazon and Netflix, there is hardly any sector that has remained untouched by Artificial Intelligence.

Thus, the adoption of artificial intelligence is not the challenge; its ‘regulation’ is the slippery slope. This leads us to questions such as: do we need to regulate artificial intelligence at all? If yes, do we need a separate regulatory framework, or are the existing laws enough to regulate artificial intelligence technology?

Artificial intelligence goes beyond ordinary computer programs and technological functions by incorporating the intrinsically human abilities to apply knowledge and skills, and to learn and improve with time. This makes such systems human-like. Since humans have rights and obligations, should these human-like systems not have them too?

But at this point in time, there have been no regulations or adjudications by the Courts acknowledging the legal status of artificial intelligence. Defining the legal status of AI machines would be the first cogent step in the framing of laws governing artificial intelligence and might even help with the application of existing laws.

A pertinent step towards a structured framework was taken by the Ministry of Commerce and Industry when it set up an 18-member task force in 2017 to highlight and address the concerns and challenges in the adoption of artificial intelligence and to facilitate the growth of such technology in India. The Task Force published a report in March 2018[1] in which it recommended the steps to be taken in formulating a policy.

The Report identified ten sectors with the greatest potential to benefit from the adoption of artificial intelligence and to contribute to the development of AI-based technologies. The report also highlighted the major challenges that large-scale implementation of artificial intelligence might face, namely: (i) encouraging data collection, archiving and availability with adequate safeguards, possibly via data marketplaces/exchanges; (ii) ensuring data security, protection, privacy and ethical use via regulatory and technological frameworks; (iii) digitisation of systems and processes with IoT systems whilst providing adequate protection from cyber-attacks; and (iv) deployment of autonomous products and mitigation of their impact on employment and safety.[2]

The Task Force also suggested setting up an “Inter-Ministerial National Artificial Intelligence Mission” for a period of five years, with funding of around INR 1,200 crore, to act as a nodal agency coordinating all AI-related activities in India.

 

Core Legal Issues

When we look at the adoption of artificial intelligence from a legal and regulatory point of view, the main question is whether the existing laws are sufficient to address the legal issues that might arise, or whether we need a new set of laws to regulate artificial intelligence technologies. While certain aspects, such as intellectual property rights and the use of data to develop artificial intelligence, might be covered under existing laws, some legal issues may require a new set of regulations to oversee artificial intelligence technology.

 

  • Liability of Artificial Intelligence

 

The current legal regime does not have a framework where a robot or an artificial intelligence program might be held liable or accountable in case a third party suffers any damage due to any act or omission by the program. For instance, let us consider a situation where a self-driven car controlled via an artificial intelligence program gets into an accident. How will the liability be apportioned in such a scenario?

The more complex the artificial intelligence program, the harder it will be to apply simple rules of liability to it. The issue of apportionment of liability will also arise where the cause of harm cannot be traced back to any human element, or where the damage caused by an act or omission of the artificial intelligence could have been avoided by human intervention.

One more instance where the current legal regime may not be able to help is where the artificial intelligence enters into a contractual obligation after negotiating the terms and conditions of the contract and subsequently there is a breach of contract.

In United States v. Athlone Indus Inc,[3] the court held that since robots and artificial intelligence programs are not natural or legal persons, they cannot be held liable, even if devastating damage is caused. This traditional rule may need reconsideration with the adoption of highly intelligent technology.

The pertinent legal question is what kind of rules, regulations and laws will govern these situations, and who is to decide, given that artificial intelligence entities are not considered subjects of law.[4]

 

  • Personhood of Artificial Intelligence Entities

 

From a legal point of view, the personhood of an entity is an extremely important factor in assigning rights and obligations. Personhood can be either natural or legal. The attribution of personhood is important because it helps identify who would ultimately bear the consequences of an act or omission.

For artificial intelligence entities to have any rights or obligations, they should be assigned personhood so as to avoid legal loopholes. “Electronic personhood”[5] could be attributed to such entities in situations where they interact independently with third parties and take autonomous decisions.

 

  • Protection of Privacy and Data

For the development of better artificial intelligence technologies, the free flow of data is crucial, as it is the main fuel on which these technologies run. Thus, artificial intelligence technologies must be developed in such a way that they comply with the existing laws on privacy, confidentiality and anonymity, and with the other data protection frameworks in place. There must be regulations which ensure that there is no misuse of personal data or security breaches. There should also be mechanisms that enable users to stop the processing of their personal data and to invoke the right to be forgotten. It further remains to be seen whether the current data protection/security obligations should be imposed on AI and other similar automated decision-making entities to preserve individuals’ right to privacy, which was declared a fundamental right by the Hon’ble Supreme Court in KS Puttaswamy & Anr. v Union of India and Ors[6]. This also calls for an all-inclusive data privacy regime which would apply to both the private and public sectors and would govern the protection of data, including data used in developing artificial intelligence. Similarly, surveillance laws would also need revisiting to address circumstances involving the use of fingerprints or facial recognition through artificial intelligence and machine learning technologies.

At this point, there are many loose ends to be tied up, such as the rights and responsibilities of the person who controls the data used for developing artificial intelligence, or the rights of the data subjects whose data is being used to develop such technologies. The double-edged situation between the development of artificial intelligence and the demand for access to data for further, additional purposes also needs to be deliberated upon.

Concluding Remarks

In this evolving world of technology with the capabilities of autonomous decision making, it is inevitable that the implementation of such technology will have legal implications. There is a need for a legal definition of artificial intelligence entities in judicial terms to ensure regulatory transparency. While addressing the legal issues, it is important that there is a balance between the protection of rights of individuals and the need to ensure consistent technological growth. Proper regulations would also ensure that broad ethical standards are adhered to. The established legal principles would not only help in the development of the sector but will also ensure that there are proper safeguards in place.
