Exploring How Lawyers Drive Innovation And Accountability in the Digital Age with Smita Rajmohan

“I believe the landscape of technologies will continue to evolve, alongside shifting focuses.”

In the rapidly evolving landscape of technology, the role of lawyers has undergone a profound transformation, particularly with the advent of Generative AI. Over the past 11 years, Smita Rajmohan has stood out as a seasoned professional, with extensive experience as in-house counsel for tech giants like Apple and Autodesk. Currently serving as Senior Counsel to the VP of AI Platform and Identity at Autodesk, she plays a pivotal role in shaping generative AI development and adoption.

As the conversation in the legal realm shifts towards addressing the legal challenges posed by emerging technologies, lawyers find themselves at the nexus of innovation and regulation. In this podcast, we explore the invaluable contributions of lawyers in the AI world, examining their journey, evolving responsibilities, and the pivotal role they play in shaping the legal and ethical frameworks surrounding AI development and deployment. Through insightful conversations with Smita Rajmohan, we delve into the intersection of law and technology, uncovering the strategies, considerations, and future prospects for lawyers in the age of AI.


AIM: How has your professional role evolved over the past 11 years, considering the significant shift from traditional data analytics to the emergence of Generative AI? Specifically, how has your journey transformed in response to the evolving landscape of AI and data technologies?

“I believe the landscape of technologies will continue to evolve, alongside shifting focuses.”

Smita Rajmohan: Much of my early work was centered around big data and statistical analysis. We then ventured briefly into blockchain and crypto before delving into privacy concerns, and now we find ourselves amidst the era of AI, particularly Generative AI. I believe the landscape of technologies will continue to evolve, alongside shifting focuses. However, the essence of lawyering remains consistent: identifying issues, addressing them with existing laws, and contemplating potential legal ramifications, even in the absence of specific legislation.

AIM: In the evolving technological landscape, particularly with advancements like Generative AI, how do you address legal challenges and navigate regulatory frameworks that may not have caught up with these developments?

“Technology lawyers were generalists, handling intellectual property, commercial contracts, and privacy policies all at once.”

Smita Rajmohan: The evolution of issues is fascinating, particularly considering the absence of significant attention to AI and privacy in the past. When I began my journey as a lawyer, privacy wasn’t even a distinct practice area at law firms; it was rather unconventional. Technology lawyers were generalists, handling intellectual property, commercial contracts, and privacy policies all at once. However, this landscape has evolved into specialization, with dedicated privacy law practitioners emerging.

Nowadays, we witness the rise of niche practices, such as privacy law specialists. Similarly, the emergence of AI law as a specialized field is becoming apparent. The core objective remains consistent: addressing the significant challenges in AI, primarily rooted in data. For those like myself, who have navigated data issues in technology law for years, the transition to AI law is seamless.

AIM: How can lawyers bridge the gap between legal frameworks and the ethical considerations crucial for responsible AI development?

“Neglecting data governance can lead to significant legal and financial consequences.” 

Smita Rajmohan: There’s a well-known adage in AI circles that states, “garbage in, garbage out.” This essentially means that if you use data for its own sake without proper data governance, you are likely to encounter several problems. Firstly, the quality of your algorithms may be compromised, leading to suboptimal outcomes. Additionally, this lack of precision in handling data increases the risk of data security issues. With an abundance of unclassified data, you’re not only risking its integrity but also non-compliance with stringent data protection regulations such as the EU’s GDPR. These regulations empower data subjects with rights to restrict the use of their data for machine learning purposes or to request data deletion.

Without robust data governance, it becomes challenging to track which datasets were used to train specific models and to honor the rights of individuals connected to that data. Failing to comply with such requests can result in severe financial penalties, including fines up to 4% of your annual gross revenue for violating data protection laws. In summary, neglecting data governance can lead to significant legal and financial consequences.
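To make this concrete, here is a minimal sketch of the kind of dataset-to-model lineage tracking Smita describes: recording which datasets trained which models so that a deletion request can be traced to the models it affects. This is an illustrative assumption, not Autodesk’s or any specific company’s system; all identifiers and the in-memory structure are hypothetical.

```python
# Hypothetical sketch of dataset-to-model lineage tracking for honoring
# deletion requests. All names ("support_tickets_2023", model IDs) are
# invented for illustration, not drawn from any real system.
from dataclasses import dataclass, field


@dataclass
class LineageRegistry:
    # dataset_id -> set of model_ids trained on that dataset
    dataset_to_models: dict = field(default_factory=dict)
    # data_subject_id -> set of dataset_ids containing that person's data
    subject_to_datasets: dict = field(default_factory=dict)

    def record_training(self, model_id: str, dataset_id: str) -> None:
        """Log that a model was trained on a given dataset."""
        self.dataset_to_models.setdefault(dataset_id, set()).add(model_id)

    def record_subject(self, subject_id: str, dataset_id: str) -> None:
        """Log that a data subject's records appear in a dataset."""
        self.subject_to_datasets.setdefault(subject_id, set()).add(dataset_id)

    def models_affected_by_deletion(self, subject_id: str) -> set:
        """If this person requests deletion, which models are implicated?"""
        affected = set()
        for dataset_id in self.subject_to_datasets.get(subject_id, set()):
            affected |= self.dataset_to_models.get(dataset_id, set())
        return affected


registry = LineageRegistry()
registry.record_subject("user-123", "support_tickets_2023")
registry.record_training("gen-ai-assistant-v2", "support_tickets_2023")
print(registry.models_affected_by_deletion("user-123"))  # {'gen-ai-assistant-v2'}
```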

AIM: What ethical considerations do you believe are essential to address in the realm of AI, particularly from your perspective as a lawyer? How have you encountered these concerns in your roles? And why do you believe it’s essential to involve ethicists or lawyers alongside technologists in addressing these issues?

“Ethics step in to address gaps in legal coverage, particularly when unforeseen harms arise… The involvement of lawyers, ethicists, sociologists, and political scientists in this process is crucial to prevent biased and harmful outcomes.”

Smita Rajmohan: I view ethics as a form of soft law, where guidelines exist even without enacted legislation or court rulings. While laws may not always keep pace with technological advancements, we inherently understand what is right and wrong. Consequently, ethics step in to address gaps in legal coverage, particularly when unforeseen harms arise.

Many companies now incorporate ethics principles and community guidelines into their policies, reflecting societal agreements. The involvement of lawyers, ethicists, sociologists, and political scientists in this process is crucial to prevent biased and harmful outcomes. Such outcomes not only contradict the objectives of humanity but also detrimentally impact business bottom lines.

While it’s commendable for technologists to engage in this area, it’s equally essential to include individuals who understand the perspectives of everyday users, such as those in law, sales, or marketing. These individuals possess insights into user desires, needs, and expectations regarding data privacy and AI, ensuring the development of products aligned with user interests.

AIM: How can lawyers develop effective communication strategies to ensure transparency in AI applications, especially considering the varying complexities involved? In your experience, what successful strategies have you employed to address transparency and explainability challenges in AI projects throughout your career?

“Transparency in AI translates to clear, public-facing statements that convey crucial details about the product, including its limitations and appropriate usage.”

Smita Rajmohan: It’s a challenging question because, as you mentioned, there’s often a “black box” aspect to it. However, one valuable skill honed as a lawyer is the ability to distill complexity into simplicity. Contrary to popular belief, effective lawyers don’t rely solely on jargon. In fact, part of our role is to ensure clarity and transparency in communication.

Consider any consumer product you encounter—it comes with labels, disclaimers, and essential information, all scrutinized by lawyers. Transparency in AI translates to clear, public-facing statements that convey crucial details about the product, including its limitations and appropriate usage.

Here’s where ethics intersect with a lawyer’s responsibility. We not only highlight the capabilities of AI but also emphasize ethical considerations. For instance, we caution against weaponizing AI or enabling discriminatory practices through automated decision-making. We advocate for transparency regarding bias testing and potential misuse.
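As an illustration of what such a public-facing statement can look like, here is a hedged sketch of the fields a transparency disclosure or model card might carry. The product name, contact address, and every entry are invented for this example and are not drawn from any actual disclosure.

```python
# Hypothetical sketch of a public-facing AI transparency statement.
# Field names and values are illustrative assumptions only.
TRANSPARENCY_STATEMENT = {
    "product": "Example Design Assistant",  # hypothetical product name
    "intended_use": "Drafting design suggestions for human review",
    "limitations": [
        "May produce inaccurate or incomplete suggestions",
        "Not evaluated for safety-critical engineering decisions",
    ],
    "prohibited_uses": [
        "Weaponization",
        "Automated decisions that discriminate against protected groups",
    ],
    "bias_testing": "Evaluated on demographic benchmark sets before release",
    "data_handling": "Customer data is not used for training without consent",
    "contact": "ai-feedback@example.com",  # hypothetical contact address
}

for key, value in TRANSPARENCY_STATEMENT.items():
    print(f"{key}: {value}")
```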

This transparency fosters consumer trust and ensures that users feel heard and respected. Accountability and explainability, among other concepts, are integral—areas familiar to technologists but longstanding concerns for lawyers. We’ve long navigated questions from both users and regulators, requiring us to articulate product development processes and data handling practices.

In essence, lawyers wear multiple hats: we serve as marketers, defenders of regulatory compliance, and investigators, ensuring ethical and legal integrity in technological advancements. We try to understand how something is really built so that we are able to best defend it.

AIM: What legal considerations should businesses and lawyers prioritize when establishing data governance frameworks for Gen AI projects? How can these frameworks address the challenges of data trust and sharing, particularly when synthetic or generated data is involved?

“Contractual frameworks outline responsibilities, such as ensuring the confidentiality of customer data and specifying security measures for datasets used in machine learning.”

Smita Rajmohan: Many of these issues find resolution within contractual agreements. Contractual frameworks outline responsibilities, such as ensuring the confidentiality of customer data and specifying security measures for datasets used in machine learning. Liability and indemnification clauses address potential damages, with lawyers facilitating agreements between customers and vendors.

These clauses serve to protect both parties and establish governance practices. In the event of regulatory inquiries, companies can demonstrate adherence to established standards by showcasing contractual stipulations and vendor requirements aimed at safeguarding consumer information.

Additionally, industry-standard frameworks like the NIST AI Risk Management Framework provide valuable guidance. As part of the AI safety consortium, we are working to develop companion resources to enhance risk assessment and management in AI applications.

Furthermore, international standards are being established to address varying needs and risk levels. It’s essential to recognize that not all AI applications carry the same level of risk; distinctions exist between, for instance, recommendation algorithms used by streaming platforms and those powering self-driving vehicles. Understanding and applying the appropriate standards and practices tailored to each context are paramount.
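To illustrate the tiering Smita describes, here is a brief sketch of how a review process might map use cases to risk levels and the controls each level requires. The tiers, use cases, and control lists are assumptions for illustration, not an official NIST or regulatory classification.

```python
# Illustrative sketch only: tiers and controls are hypothetical examples,
# not an official NIST or EU risk classification.
RISK_TIERS = {
    "low": ["model card", "basic monitoring"],
    "high": [
        "bias testing",
        "human oversight",
        "incident response plan",
        "pre-deployment risk assessment",
    ],
}

USE_CASE_TIER = {
    "content_recommendation": "low",
    "autonomous_driving": "high",
}


def required_controls(use_case: str) -> list:
    """Return the review controls suggested for a given use case."""
    tier = USE_CASE_TIER.get(use_case, "high")  # unknown use cases default to the stricter tier
    return RISK_TIERS[tier]


print(required_controls("content_recommendation"))  # ['model card', 'basic monitoring']
print(required_controls("autonomous_driving"))
```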

AIM: What are the key priorities for organizations like the EU, OECD, NIST, and governmental bodies in addressing AI-related risks? Where have they succeeded, and where should they focus to ensure the safe use of AI technologies, especially concerning EIM data, from a legal perspective?

“As lawyers, participating in these discussions is invaluable as we seek answers to practical questions, such as whether AI-generated content requires specific labeling or attribution.”

Smita Rajmohan: In general, it’s crucial to recognize that companies and organizations lack the authority to enact legal changes, even with executive orders. Instead, such directives serve as guidance for lawyers to assess risks effectively. A beneficial aspect of this guidance would be clarification on potential harms and corresponding mitigation strategies.

Within the AI safety consortium, various working groups focus on safety, security, and the authentication of synthetic content. As lawyers, participating in these discussions is invaluable as we seek answers to practical questions, such as whether AI-generated content requires specific labeling or attribution.

Understanding the implications of such decisions, including their potential impact on intellectual property rights, is essential. While agencies can offer technical advice on issues like deep fake authentication, legislation may be necessary to enforce certain practices.

It’s important to note that while guidelines and frameworks are helpful, they are voluntary. Ultimately, the responsibility lies with legislators to pass laws mandating specific actions. Thus, while lawyers can advocate for necessary measures, meaningful change often requires legislative action.

AIM: How challenging is it for lawyers without a background in technology to understand AI concepts? How can collaboration between lawyers and technologists be improved, and what level of technological understanding is essential for lawyers in addressing AI-related legal and ethical issues effectively?

“I think of myself as a legal business partner to a tech team, just like PMs, project managers, program managers, and scrum teams.”

Smita Rajmohan: I used to know how to code, so it’s not like I walked into it without knowing anything about software. To be fair, if you’re a patent lawyer, you don’t necessarily need to understand literally how to make a thing; you need to understand how to prove that something is novel and non-obvious, right? But I don’t think you necessarily need to know how to build a neural network from scratch.

If you don’t understand the intricacies and fundamentals of how the neural network is built, you might miss important issues. My best friends are technologists, so I don’t know who these lawyers are that don’t want to talk to technologists. I think of myself as a legal business partner to a tech team, just like PMs, project managers, program managers, and scrum teams.

I’m like a legal and governance expert. At least in tech companies here, if you’re something like a product counsel, which is what I am, you pretty much get attached to the product team from the start. So I’m there when the product is being conceived, all the way to launch and post-launch. It’s a great way of understanding; I’m in the room where they’re thinking about building the thing, which is just great osmosis.

And that’s true not just for software. Before this, I was at Apple, where I was on the hardware team. I knew nothing about hardware, but because I was there, I was able to pick it up: this is what it means, this is what we use, these are the sensors we use, this is how something is built, this is the silicon, and so on. You just pick it up; you just have to be curious as an attorney.

Attorneys are nerds and they like learning. You can’t be a lawyer if you don’t like to learn. They’re generally pretty curious and open. If you’re that kind of person, then I think you would love actually learning something new, especially if you enjoy the technology itself.

AIM: How do you foresee the role of lawyers in the field of AI evolving, particularly with the increasing focus on processing vast amounts of data to develop large language models?

Smita Rajmohan: I think lawyers have already become privacy professionals, more generalists around protecting privacy. Now, we’re going to see a refocus on intellectual property rights, which excites me, as I started as an IP lawyer. There are many implications of how data is used to create something and the IP perspective on it, such as whether you can gain protection for what is created.

You probably saw that The New York Times sued OpenAI over IP infringement issues. There are lots of interesting IP topics. So I believe the role of a lawyer will evolve into more of a governance professional, someone involved in ensuring that the AI entering the space is safe and responsible.
