After several decades of artificial intelligence development, we have seen a dramatic increase in AI adoption in various forms over the past year. While organizations are quickly learning how to leverage the seemingly endless opportunities, we also increasingly hear from legal counsels weighing perceived legal risks and the best ways to address them.
To contribute to this process, Kennedy Van der Laan recently hosted more than twenty senior legal counsels at its Amsterdam offices for an intimate panel discussion with a delegation from GitHub, the global developer platform that offers GitHub Copilot. Released in October 2021, this AI-powered technology allows developers to scale and deliver software in ways previously unseen. Based on a user's comments and code, it instantly suggests individual lines and whole functions, allowing developers to code faster and with less effort. With all this attention among developers, GitHub Copilot has by now become the world's most widely adopted AI developer tool.
To lay some common technical ground for our legal discussion, Ryan Salva (VP of Product at GitHub) first explained the fundamentals of the generative AI model: how it was trained on natural-language text and source code from publicly available sources, and how it processes user inputs to generate outputs.
After gaining a better understanding of this technological background, Shelley McKinley (Chief Legal Officer at GitHub) then engaged with participants about some of the hurdles legal counsels face when their clients adopt new technologies. On the one hand, the hurdles might involve user inputs: does the user retain intellectual property rights to the inputs? What if inputs contain personal data or sensitive corporate information? Will inputs be used for further training of the application? On the other hand, hurdles might exist around the outputs of those applications: who owns the intellectual property rights to those outputs? What if the outputs contain third-party material or preexisting intellectual property? Who is responsible for "hallucinations", which occur when confidently presented outputs are actually incorrect?
After brainstorming about how to overcome these hurdles, the participants exchanged thoughts about the factors that engineering teams consider when evaluating whether to adopt AI tools, and how their legal counsels can help them with that evaluation. Finally, the participants discussed how to ensure that internal use of AI tools is aligned with the organization's values. After all, many participants agreed that even when AI-driven technologies can be relied upon to enhance efficiency and increase effectiveness, responsible use always involves human oversight, and therefore awareness of and commitment from all end users involved.
Kennedy Van der Laan plans to host similar events in the future. Meanwhile, for more insights into the intersection of law and technology, sign up for the Kennedy Van der Laan Technology newsletter here [->aanmeldknop].
GitHub x Kennedy Van der Laan event
2 October 2023