
What do the new USPTO guidelines about using AI mean for you and patent research?

A few days ago, the USPTO released additional guidelines about the use of AI in interactions with the Office. What do they mean for using AI in patent research and analysis? Andreas Cehlinder gives a summary.

At this point, it’s clear that using AI in patent work is inevitable, and that AI is providing great benefits for everyone involved – from the inventors and organizations looking to protect their innovations, to the practitioners who represent them, to the authorities themselves.

A few days ago (April 11), the USPTO released additional guidelines (building on guidelines released in February) about the use of AI in interactions with the USPTO. The guidelines apply to anyone who practices before the USPTO and to all interactions with its various bodies, including the PTAB.

These new guidelines come on the back of President Biden’s executive order on AI, published towards the end of last year. The order recognizes that AI has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative and secure (I like this statement!), but it also recognizes that AI may pose risks around security and fraud, and even to national security.

Examiners are relying more and more on AI, e.g. for prior art searching and other parts of examination, and it’s clear that law practitioners are doing the same.

With this in mind, there are several areas where these guidelines provide clarity. But what do they mean, in particular for using AI in patent research and analysis? This short post attempts to summarize the guidelines and answer this question.

To start with, the USPTO goes through the rules that were already in place and are relevant to these new guidelines. In brief summary, these are:

  • Every practitioner who interacts with the USPTO needs to adhere to the rules on “Candor and good faith”. These ensure the integrity of proceedings and the accuracy of decisions, and mean that practitioners cannot take fraudulent or misleading actions or withhold relevant information in their interactions with the USPTO.
  • Further to this, there are rules and regulations about “Signature requirements and corresponding certifications”, stating that submissions and correspondence with the USPTO must be signed by the individual submitting them. This ensures the accuracy and integrity of the statements made.
  • There are also rules about “Confidentiality of Information”, which prohibit a practitioner from revealing information to third parties unless the client gives some form of consent, and require them to take reasonable measures to prevent such information from being disclosed – for example, in a prior art search or when drafting applications.
  • The rules about “Foreign filing licenses and export regulations” mean that practitioners must comply with foreign filing license requirements (licenses can be obtained from the USPTO in various ways) before exporting technical data relating to US inventions, for example when preparing and prosecuting patents outside of the US.
  • The USPTO also has “Electronic systems’ policies”, meaning that any information or service the USPTO provides through its electronic platforms is subject to terms and conditions, and individual user registration and logins are usually required.
  • Finally, there are rules that regulate “Duties owed to clients”, which dictate that anyone representing a client before the USPTO needs to be competent and diligent in that representation. This includes staying abreast of the benefits and risks of new technology, such as artificial intelligence, and, to some extent, explaining the use of these technologies in the representation so that the client can make informed decisions.

How should the guidelines be interpreted?

Using software and AI for e.g. prior art searching and drafting patent applications is not prohibited, and there is no general obligation to report that such tools have been used in the preparation of material, but the rules and guidelines above need to be applied when using them. So, how should these rules be interpreted when AI is used?

  • Firstly, the signature requirement means that a document has to be signed by a human. The signature is a guarantee that the content of the document is accurate and that a human has verified this accuracy, even if the content has been (to varying degrees) generated using AI. Using AI to validate the accuracy is not enough, and things like AI hallucinations or misstatements need to be identified and rectified before the documents are submitted. Prior art searches and IDS submissions are other examples where this requirement applies: results may not necessarily be “hallucinations”, but they may be irrelevant to the matter at hand. These too need to be reviewed and validated.
  • Secondly, the candor and good faith requirements should be applied whenever facts are submitted to USPTO bodies. For example, if AI has been used to draft claims for an invention and the inventor knows that one or more of the claims did not have a significant contribution from a human (which we already know is a requirement) but was instead added by the AI, this has to be disclosed. While not specifically stated, this should also apply to the submission or omission of relevant prior art.
  • Further, when interacting with the USPTO through its websites or IT systems, it’s worth noting that access to these systems is granted to authorized individuals and that an AI is not allowed to get an individual login or account. AI is not an individual and cannot be a “user” of these tools, which means it may not submit certain information either (see also the signature requirements above).
  • Finally, the confidentiality rules require that a practitioner is cautious when using AI in the preparation of documents included in USPTO correspondence, and that they ensure confidential information is not inadvertently disclosed in the process. Some AI applications may store such confidential information, and even use it for further training, which may be a breach of the confidentiality obligations. The typical illustration would be uploading an invention disclosure to an AI tool to run a prior art search or draft claims. When building or buying such tools, users must be “especially vigilant” to ensure that confidentiality is maintained, and the risk of security breaches and similar issues must be taken into consideration.

How does this relate to IPRally?

So, it’s clear that some considerations have to be made when deciding if and how to use AI in patent searching or in preparing documents that go to the USPTO. The question is whether these clarifications change much. Common sense tells us that AI cannot be an individual, or advise clients with the same rigor and accountability as a human attorney. Information has been disclosed to third-party software before (docketing, drafting tools etc.), and the same rules about confidentiality applied then. AI adds additional questions, but shouldn’t make a fundamental difference in how we approach data security and confidentiality. And we should always ensure that information is accurate before signing it, whether it’s AI generated or not. I guess time will tell what it means to be “cautious” and “especially vigilant”, and that we’ll see examples of when things don’t go according to plan.

As this relates to IPRally, a tool that many use to prepare material that goes before the USPTO (and other such authorities), we are proud to have focused on these aspects from the start while building our solutions:

  • In order to advise clients and validate results, the tools need to provide transparent models and explainable, verifiable results.
  • To ensure confidentiality, any third-party tool or AI needs to be very secure, minimizing the risk of security breaches and of submitted data being retained or used to train public algorithms.
  • The AI should be trained on accurate, validated, and curated data from reliable sources, and the risk of hallucinations needs to be minimized by controlling the AI that is developed and used.

Anyone who is using IPRally can be confident about these aspects, as (1) transparency and explainability are among the key benefits of Graph AI compared to other AI models, (2) your data is double encrypted and very secure in IPRally, and is never retained or used to train the public algorithms, and (3) we only use curated and structured data to train our models.

What do you think? Will these guidelines change how you use software and AI in your preparation work? Will they change how you assess which tools you should (and should not) use, and give you additional checklists for doing so? Let us know – we’re looking forward to hearing your thoughts!

Andreas Cehlinder
April 22, 2024