Taylor Swift’s experience with malicious deepfake photos earlier this year brought to light a concerning fact: this type of digital abuse disproportionately targets women and girls. Yet while political figures draw much of the attention around deepfake abuse, the wider effects on everyday people, especially women, are often overlooked.
This lack of focus is personal to Melissa Hutchins. Having been the target of a severe cyberstalking case, Hutchins knows firsthand the helplessness such attacks can engender. After enduring a protracted court battle against persistent violent threats, she was motivated to act. In late 2023 she co-founded Certifi AI, a Seattle-based firm, with the goal of detecting and stopping deepfake abuse and giving clients the means to safeguard their identities and themselves.
Melissa is a seasoned product leader with a strong track record of developing highly technical solutions for Fortune 500 companies. With expertise in API development and AI/ML capabilities, she has consistently delivered impactful innovations throughout her career.
Driven by a deeply personal mission, Melissa transitioned from her corporate role to tackle the pressing challenges of cyber abuse and build an ethical AI infrastructure. A survivor of severe cyber harassment and stalking, she has channeled her experiences into creating tools that safeguard people in the digital age. Under her leadership, Certifi AI is dedicated to advancing technology that detects, distinguishes, and verifies the authenticity of digital content, empowering users while fostering trust and accountability in an increasingly complex digital landscape.
Kashyap Raibagi: Hello and welcome everyone to the next episode of the AIM Media Host podcast, Simulated Reality. Today we have with us the founder and CEO of Certifi AI, Melissa Hutchins. Melissa, thank you so much for making the time today. How are you doing?
Melissa Hutchins: I’m doing great. Thanks for having me.
Kashyap Raibagi: Melissa, I’ve heard your story and read about it in some detail, and it’s quite inspirational to sit down with you to talk about your startup. At the same time, it’s a perfect example of how you can turn your weaknesses into strengths and turn your experience into your power, and I’m excited for our audience to hear your story. Before we dive into the Certifi AI journey, if you’re okay with it, would you like to share what inspired this startup?
Melissa Hutchins: I think specifically for the company that I built, it came from a very personal place and a personal experience. Going back to the beginning, I had worked in developing technology as a product manager in Seattle, mainly specializing in API development and machine learning capabilities for image and video content. These would include things like algorithms for personalizing the experience. I just really fell in love with building technical solutions from ideation to launch and continuing to iterate. I think there are a lot of entrepreneurial tendencies in product management because you go from the problem we need to solve, to the ideas we’re hearing from our users, to what’s going to be the best solution for those users, and then you continue to iterate on that. I fell in love with that type of building process.
Unfortunately, during that same time, I had been the target of a severe cyberstalking case for years. Essentially, I had sought protection from an individual, and the stalker weaponized whatever existing technologies were available to terrorize not only myself but any identifiable support system. This included friends, family, police investigators, lawyers, prosecutors, and even federal judges. It was very extreme, and the case ultimately reached federal authorities, where he was found guilty on all counts and sentenced to 9 years in federal prison. That was a really big milestone, and it wouldn’t have been possible without the years of work from the US Attorney’s Office, the Secret Service, and the SPD. But it took five years to get there, and that was very mind-blowing in a lot of ways.
I learned so much from that experience and knew that I needed to use what I learned for good, to address the gaps between advancing technology and user protection. I also knew how easy it is for bad actors to manipulate technology and how difficult it can be to hold somebody accountable and prosecute cyber crimes. There are so many ways to mask an identity: VPNs, fake accounts, anonymous emails and phone numbers, you name it. It’s just very difficult to pinpoint the person responsible and have them held accountable. So, that being said, in early 2023, I saw this explosion of AI technology being made publicly available with little to no safeguards in place, including image generation tools that let you create hyperrealistic scenarios of essentially whatever you want, super quickly. That was the catalyst. That was the moment I knew I needed to do something, and do it now. I knew that I couldn’t wait for someone else to be that advocate or to know all of the gaps that I experienced. So I left the corporate world and started my own company to create the change that I needed to see, the change that I knew so many other people needed to see: to protect vulnerable individuals who are at risk and build the technology that I wish had been available in my case.
Kashyap Raibagi: Yeah, when you tell this story, there were a lot of factors that contributed to it, especially when you had to fight the perpetrator and make sure you got justice. At the same time, looking from a broader perspective, there was technology, there were lawyers, there were policymakers, and there were so many other factors involved. But did your story start with counteracting technology with technology? Why is that important to you? Is it just because of your past experience working in technology, or do you also believe that technology is essential to counter technology?
Melissa Hutchins: I think my previous experience was really helpful. I had a lot of subject matter expertise on the machine learning side, specifically for images and video, and in how to train models for detection. A lot of machine learning capability for images comes down to tags: there are a bunch of different tags associated with an image. So a picture of you at a park with a friend, for instance, has all of these associated tags tied to it, and those all play into detecting and ranking images. Having expertise in that space made me feel more comfortable diving into this area and starting my own company. But definitely, my own experience was the fuel that I knew could sustain me through all of the hard times. I knew that I had something worth fighting for, something that was way too important to leave behind or to rely on someone else, again, to be that advocate. It became very clear that for this type of technology to fully serve victims, survivors, and future generations, we need the diverse perspectives of people who know exactly what it’s like to go through that process. It was a combination of both, but I think it also gives me and my team a unique advantage in fully supporting and tackling this problem.
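To make the tagging idea concrete, here is a minimal sketch using a standard pretrained classifier: the model assigns descriptive labels ("tags") and confidence scores to an image, which downstream systems can use for detection and ranking. This is an illustration with torchvision, not Certifi AI’s actual pipeline.

```python
# Minimal image-tagging sketch: a pretrained classifier produces
# (tag, confidence) pairs for an image. Model choice and top_k are
# illustrative assumptions.
import torch
from torchvision import models
from torchvision.models import ResNet50_Weights
from PIL import Image

weights = ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()
preprocess = weights.transforms()  # the preset transform that matches the weights

def tag_image(path: str, top_k: int = 5) -> list[tuple[str, float]]:
    """Return the top-k (tag, confidence) pairs for an image."""
    img = Image.open(path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)  # shape: [1, 3, H, W]
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)[0]
    top = probs.topk(top_k)
    labels = weights.meta["categories"]  # the 1000 ImageNet class names
    return [(labels[i], float(p)) for p, i in zip(top.values, top.indices)]

# e.g. tag_image("park_photo.jpg") might return
# [("park bench", 0.41), ("lakeside", 0.18), ...]
```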
Kashyap Raibagi: The reason I ask is that technology is advancing at a very rapid pace, and it serves many good purposes but bad ones as well, right? It can empower bad actors at scale, which is the fundamental conversation I want to have. But for someone who doesn’t have a background in technology and wants to solve a problem, whether in this space of deepfake abuse or beyond it, primarily in making technology safer or in any other domain, how does one start leveraging technology to solve the problem when they don’t understand it in that much detail?
Melissa Hutchins: As far as our technology, I want to speak to that a little bit, because I didn’t have a degree in computer science. I came out of college and was learning a lot on the go. I think what’s important in this new era is being adaptive, learning, soaking up as much of this new technology as possible, and staying curious about what we’re working on.

At its core, our technology serves multiple critical functions. First, being able to detect and verify manipulated content to prevent widespread dissemination; this is crucial for minimizing potential harm. When victims can quickly identify and flag harmful content, we can notify platforms that the content needs to be taken down and reduce that window of vulnerability. That being said, technology alone isn’t going to be enough, especially in the space of deepfake content. We’re developing more of a holistic solution that combines technical capabilities with survivor-centered support. That means creating intuitive reporting mechanisms and documentation of evidence, which is so important in these cases, and a step-by-step approach people can follow, with resources that prioritize their agency. We want people to feel like they have control, not just over their content but over their narrative, on a platform that’s also designed to be a tool for law enforcement and to aid in these types of investigations.

One of the most eye-opening things for me was how extreme a case needed to be before a robust digital forensic investigation was brought in. I had no idea that the Secret Service investigated these types of cyber criminal cases; I thought the Secret Service protects the president. So learning and absorbing all of this information and adapting is important. At the same time, building this type of tool means understanding that each survivor’s experience is incredibly unique, so we build in customizable settings, like the ability to report and have direct communication with support networks. Lastly, I think the most powerful aspect of technology in this context is the ability to scale compassionate intervention. By developing an intelligent, sensitive tool, we can provide immediate, global support to individuals who might otherwise feel isolated or powerless. It’s not just about stopping the harm; it’s about creating a digital landscape that respects individual dignity and consent, and that’s really at the forefront of what we’re doing. Even though I didn’t start as a super technical person, I latched on to the fact that this is a problem that needs solving and this is technology that can bring a solution to scale. And be curious with the technical people around you. Don’t be afraid to ask questions about how something works, because more often than not people are very excited to answer those questions for you. Just continue to stay curious as technology advances.
Kashyap Raibagi: Let’s talk a little bit about the solution you’ve built. But before that, help us understand the problem statement. In your case it was cyberstalking, but among the problems you’re trying to solve, various deepfake cases have come out: some videos were built during election time, and we have seen so many politicians and so many individuals impacted, at a very personal scale as well. Can you help define what this problem becomes if we don’t control it, and then help us understand your vision of how you are solving the problem currently and what you want to do in the future?
Melissa Hutchins: Addressing your first piece, which is focused on some of the political discussions and deepfakes we’ve seen in the media: the most critical misconception about deepfakes, at least in my opinion, is that they’re simply a technological problem, because really they’re a deeply personal violation of someone’s privacy that predominantly targets women and girls. 96% of the deepfakes that are generated are sexually explicit, and 99% of those depict women or girls. That is the data we’re seeing. But a lot of the news just highlights celebrities or political figures, and while those political deepfakes make headlines, millions of everyday individuals, especially young women, journalists, and marginalized communities, are silently experiencing this form of violence. I would emphasize that the tech community has a moral imperative to move beyond performative conversations and take concrete action toward tackling this, because that’s what matters at the end of the day.
And that means two fundamental shifts in the way things work. First, we need to establish concrete guardrails to prevent malicious use of AI technology. One piece of legislation is the Take It Down Act, a current bill that would essentially criminalize non-consensual intimate imagery, including AI-generated content.
Second, I believe we need to integrate survivor-led perspectives into these types of solutions and their technical design. Too often, solutions are built without an understanding of the lived experiences of those most impacted by this issue. If you have a narrow set of perspectives in the development room, you’re going to be missing something. If you don’t have those perspectives in the room, or somebody to raise a hand at a feature that might be problematic, that’s going to be a really big issue for the people who live outside of that perspective. So that’s another piece.
Educational initiatives are crucial as well. We need comprehensive programs that teach digital literacy, consent, and online boundaries, as well as the psychological impacts of this type of digital abuse. It’s going to be a cultural shift in a lot of ways, because young people especially need to understand that generating or distributing this type of manipulated content isn’t just a prank or a joke. It’s a serious violation of someone’s dignity, and there are going to be serious repercussions for it. I would love to see more support from the tech community in developing standardized technologies that work in tandem with legal enforcement. This includes advanced verification and evidence collection, along with platforms that prioritize victims and their safety, which is essentially what we’re working to provide. And to get to your second point and speak a little more to Certifi AI: we’re creating a comprehensive solution that addresses the wide set of challenges associated with non-consensual intimate imagery and AI-generated abuse.
Our technology does three critical things. First, we’ve developed an advanced content provenance system that registers and secures original media files, so we can automatically notify content owners if their images are being used or posted without their consent. Second, we’ve built a reporting infrastructure that can quickly flag abusive content and facilitate getting it taken down across various platforms. Third, we provide organizations, law enforcement, and individuals with a secure digital toolkit that essentially gives them more control over their online presence. I’ve learned a lot since starting Certifi. My vision started with detection, with being able to distinguish what has been manipulated from what’s original content, but it has expanded beyond detection into creating an ecosystem of support from multiple angles. That means partnering with government agencies, legal resources, and tech platforms so we can ensure a very comprehensive support system. My goal is to transform how we approach this issue and shift from historically reactive measures to a more proactive level of protection.
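As a rough illustration of the first piece, the registration step of a content provenance system might look like the sketch below: fingerprint a file, record ownership metadata, and later check whether a found file matches a registered one. This is a minimal sketch under stated assumptions (a local JSON store and SHA-256 exact matching); Certifi AI’s actual system is not public.

```python
# Minimal content-registration sketch. A real system would use a secure
# backend database and stronger identity checks; this is illustrative only.
import hashlib
import json
import time
from pathlib import Path

REGISTRY = Path("registry.json")  # hypothetical local store

def register_media(path: str, owner: str) -> dict:
    """Record a cryptographic fingerprint and ownership metadata for a file."""
    data = Path(path).read_bytes()
    record = {
        "sha256": hashlib.sha256(data).hexdigest(),  # exact-match fingerprint
        "owner": owner,
        "filename": Path(path).name,
        "registered_at": time.time(),
    }
    registry = json.loads(REGISTRY.read_text()) if REGISTRY.exists() else []
    registry.append(record)
    REGISTRY.write_text(json.dumps(registry, indent=2))
    return record

def check_media(path: str) -> dict | None:
    """Return the registration record if this exact file was registered."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    registry = json.loads(REGISTRY.read_text()) if REGISTRY.exists() else []
    return next((r for r in registry if r["sha256"] == digest), None)
```

Note that exact hashing only catches byte-identical copies; the perceptual-hash sketch later in this interview shows one way to catch re-encoded or screenshotted copies.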
Kashyap Raibagi: Got it, from the technology point of view as well. I had a chance to briefly see what you’re trying to build, and right now you are saying that your technology can help people tag their photos, videos, or any other content, and register it in a way that, if it is misused, you can detect that in the future. You also talked a little bit about your vision. While that’s important, reactiveness will also come into play, because generative AI and deepfake technology are advancing fast, and there will be a need to detect fake videos and images on the go. At the same time, to your point, content needs to be caught within 48 hours, for all the reasons you mentioned, and you did say you want to work in that direction. So can you help us understand your vision there, as well as the reactive measures in your technology?
Melissa Hutchins: On the points you just mentioned about becoming aware that this content exists: historically, in a lot of cases, and in some of the feedback we’ve gathered from other survivors, by the time people find out that there’s an image or a video of them, like a deepfake, out there in the world, sometimes it’s months after it’s been posted. Sometimes it’s years. And a lot of the time it’s a friend reaching out to notify you, and then you’re quickly following up with the platforms and saying, “Hey, this isn’t me. This needs to be taken down.” That’s where my team and I are working on a computer vision piece, where we’ll have trained models that can recognize when your face or your likeness is being used online and notify you, whether it was posted with your knowledge or without it. You can see what exists of your likeness online, have that awareness, and then take the necessary steps forward. So if somebody posts a deepfake of you tomorrow, we can notify you as soon as possible so that you can file a report through our platform to the social media companies: hey, this needs to be taken down. They essentially have 48 hours to do so, or they’re out of compliance, especially under this new act. And then you can track the progress of that report and see what stage it’s at: a report has been filed, we’ve received a response from the social media company, it’s been taken down. That’s really what it comes down to: making sure people have that level of power over their online presence, knowing when their likeness is being used through computer vision capabilities, and then being able to take action by filing a report through our system. We can do all of that heavy lifting so you don’t have to, and we document the progress of the case should you need to reference it later. If it comes to a law enforcement investigation, all of that is documented within our tool and can be used as a resource.
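To make the likeness-notification idea concrete, here is a hedged sketch of one way a likeness match could work, using the open-source face_recognition library: compare a registered reference photo against an image found online and flag a match. The library, tolerance, and pipeline here are illustrative assumptions, not Certifi AI’s actual models.

```python
# Sketch of likeness matching: does the candidate image contain a face
# that matches the reference photo? Tolerance 0.6 is the library's
# documented default; lower values are stricter.
import face_recognition

def likeness_match(reference_path: str, candidate_path: str,
                   tolerance: float = 0.6) -> bool:
    """Return True if a face in the candidate image matches the reference."""
    ref_image = face_recognition.load_image_file(reference_path)
    ref_encodings = face_recognition.face_encodings(ref_image)
    if not ref_encodings:
        raise ValueError("No face found in the reference image")

    candidate = face_recognition.load_image_file(candidate_path)
    for encoding in face_recognition.face_encodings(candidate):
        # face_distance returns Euclidean distances; lower means more similar
        distance = face_recognition.face_distance([ref_encodings[0]], encoding)[0]
        if distance <= tolerance:
            return True
    return False
```

In a monitoring pipeline, a match like this would trigger the notification Melissa describes, so the content owner can review the post and file a takedown report.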
Kashyap Raibagi: Got it. And what are some of the other technologies that already exist to fight this, and what are you building differently? I have seen some tools online that, if you upload a video, can tell you whether it is a deepfake or not. Is there already technology around that? And what are you trying to build differently? That is also a question I have.
Melissa Hutchins: If we’re talking specifically about deepfake detection and analyzing whether there’s been artificial manipulation in a photo, a lot of the technologies that exist today work on a statistical basis. You can get a gauge of the likelihood that an image has been doctored, fully AI-generated, or partially AI-generated. But a lot of these tools give a percentage score; it’s not 100% certain. They look at patterns, at the common patterns within generated images or AI manipulation that increase the likelihood of it being manipulated. That exists today, and it’s useful, I believe, but we need more of a provenance system. There are some organizations doing work there; I know Adobe is doing work on content provenance, and watermarking and hashing are things that we do as well. It’s essentially having a nutrition label for content: who is the owner of that content? That is what we do. We’re able to track, through the metadata of any given image, all the information that exists, the copyright, the owner, the description, and we’re able to encrypt files. So if they’re detected on a website, we can know about it immediately, or as close to immediately as possible, and have it taken down. There is still a lot of work that needs to be done in this space, but it is moving, and I’m encouraged by the direction it’s moving in. What I will say is that there are not a lot of companies centered on protecting women and girls. When I was doing competitor analysis in this space, I saw that it was, again, a very male-dominated space, as tech startups usually are, but it was missing a very critical perspective, especially when you look at the data: 96% of deepfakes are sexually explicit, and 99% of those depict women. We’re investing all of this money to address political misinformation, and don’t get me wrong, that is very important. But it made me realize that what we are doing and advocating for is that much more important, and it’s so important for us to continue it, because statistically, if you look at the data, that’s where the impact is in this space. So there are technologies that exist, but we’re excited to be at the forefront of advocating for victims, working with law enforcement, and bringing more accessibility to digital forensics.
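For a sense of what the “nutrition label” metadata looks like in practice, the sketch below reads the standard EXIF fields (artist, copyright, description) embedded in an image file using Pillow. It is a minimal illustration of the metadata layer only; Certifi AI’s registration and encryption steps are not shown.

```python
# Read the provenance-relevant EXIF fields from an image, when present.
# Which fields exist varies by file; Artist, Copyright, and
# ImageDescription are standard EXIF tag names.
from PIL import Image
from PIL.ExifTags import TAGS

def read_provenance_fields(path: str) -> dict:
    """Extract human-readable EXIF metadata from an image."""
    exif = Image.open(path).getexif()
    fields = {TAGS.get(tag_id, str(tag_id)): value for tag_id, value in exif.items()}
    return {k: fields[k] for k in ("Artist", "Copyright", "ImageDescription") if k in fields}

# e.g. read_provenance_fields("photo.jpg") might return
# {"Artist": "Jane Doe", "Copyright": "© 2024 Jane Doe"}
```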
Kashyap Raibagi: One point you mentioned stuck with me. Right now these tools give percentages, and I’m assuming these are percentages of the likelihood that an image or a video is a deepfake. But won’t that need a reference point, the original image or video? How difficult is it, technologically, to detect that a video is a deepfake without having the original video?
Melissa Hutchins: I’m glad you raised that point, because it’s incredibly difficult, especially with the rate at which image generation tools and hyper-realistic modifications are advancing. Historically, we’ve been able to look at AI-generated images with the human eye, see things that don’t quite look right, and use our own reasoning to make that distinction. But moving forward, this is the worst the technology is ever going to be, and this is the worst those images are ever going to be. They’re only going to get more realistic. That’s why, if it isn’t ingrained in the metadata that an image was AI-generated, and this goes back to the importance of content provenance, people can’t know the reality of what they’re consuming; having that sourced within the metadata is so important. It’s very useful information, but it’s also fragile. Screenshotting, for example, is a really big issue for that type of content provenance: if you take a screenshot of something, even an AI-generated photo, that screenshot is a completely different image from a metadata perspective, but it looks the same. So what we and others in the industry are really trying to do is make image generation and deepfakes transparent to individuals, so people know what’s real and what’s not. There are a lot of different technical challenges along the way, but we’re learning more every single day about how to address them. And it’s possible. Even though I’ve been told many times that it’s going to be hard, and there are challenges, it’s not impossible at the end of the day. You need that mindset in this space, or else it’s easy to succumb to the idea that it’s all just too difficult.
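The screenshot problem can be illustrated with a short sketch: a screenshot produces a file whose cryptographic hash and metadata differ entirely from the original, while a perceptual hash of the pixels stays close. The imagehash library and file names below are illustrative assumptions, showing one common mitigation rather than Certifi AI’s method.

```python
# Why screenshots break metadata-based provenance, and one mitigation.
# "original.png" and "screenshot.png" are hypothetical files: the second
# is a screenshot of the first.
import hashlib
import imagehash
from PIL import Image

original = "original.png"
screenshot = "screenshot.png"

# Cryptographic hashes: any re-encoding changes every bit of the digest,
# so exact matching (and embedded metadata) fails for screenshots.
sha_orig = hashlib.sha256(open(original, "rb").read()).hexdigest()
sha_shot = hashlib.sha256(open(screenshot, "rb").read()).hexdigest()
print(sha_orig == sha_shot)  # False

# Perceptual hashes summarize visual content, so near-duplicates stay close.
phash_orig = imagehash.phash(Image.open(original))
phash_shot = imagehash.phash(Image.open(screenshot))
print(phash_orig - phash_shot)  # small Hamming distance => likely the same picture
```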
Kashyap Raibagi: While we solve this problem, I’m sure that Certifi AI is going through a lot of research papers, and your technologists and engineers are studying this space very closely, not just from the counteracting perspective but also in terms of what is happening in the space. I think there are enough tools and enough technology available for us to prevent this from happening at all. So my last question is: what is your message for people on how not to get into this situation? Do you encourage them to use your tool, or tools like yours, to tag whichever of their photos are going public? I know that ideally we shouldn’t need this, but we do not live in an ideal world. What would be your recommendation to prevent this?
Melissa Hutchins: So the question is how to be safe on the internet, or as safe as possible in the digital world. Yeah. I would start with the approach we’re bringing to the prevention side, creating that registration, like the demo I walked you through last time, which I want to make sure to offer to the listeners as well. We’re giving demos case by case, and we want to make sure people are informed of these solutions. Our approach is to register images autonomously, to have something working in the background, similar to how, if you’re writing something in Google Docs, you don’t have to worry about hitting save anymore. It autonomously saves to the cloud, and we know our files are secure and up to date. We’re bringing a similar approach: our tool runs in the background, registering your files as your own. You are the content owner, and we apply watermarking and hashing, which is just one of the multiple layers of encryption we put on content. Essentially, if your photos are posted or manipulated, those unique identifiers created in the registration process can immediately be surfaced. We notify you, make that association right away, and get the content taken down as quickly as possible, or give you the power to decide what your next step is.

I would say to anyone passionate about technology and making a change: the most important lesson I’ve learned is that meaningful impact comes from lived experience, extreme persistence, and a willingness to challenge what exists. Being proactive requires a certain amount of education: how can I have the most ownership over my content, and how can I stay ahead of cyber attacks? Similar to having cybersecurity software, having a content provenance system like Certifi AI that manages your content and can act as a proactive tool is super important, because the biggest issue is waiting. Just like in cybersecurity, you don’t want to wait until it becomes an issue to take the necessary steps, because by that time it’s just this frantic “What do I do?” And on the legal side, with legislation and closing the gaps in the legal system, there’s still so much work that needs to happen. There are a lot of people who find themselves in these situations and are told that there’s nothing that can be done, and for myself and for survivors and advocates, that’s no longer an acceptable response. We need to be proactive, especially given the pernicious nature of bad actors; making abuse enough of a risk, with people seen being held accountable, is as crucial as safeguarding the technology itself. Those are just off the top of my head, but there are a lot of things we can continue to do in this space: having the support of the tech community, reinforcing the importance of responsible AI, asking what responsible innovation is and what could be potentially harmful, and structuring development in a way that mitigates those risks.
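As a rough sketch of the “autosave in the background” idea, the snippet below watches a folder and fingerprints any new image automatically, using the open-source watchdog library. The folder path, file types, and print-based “registry” are illustrative assumptions; a real service would push fingerprints to a secure backend.

```python
# Background watcher sketch: fingerprint every image that appears in a
# watched folder, analogous to autosave. Illustrative only.
import hashlib
import os
import time
from pathlib import Path
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

IMAGE_EXTENSIONS = (".jpg", ".jpeg", ".png", ".webp")

class AutoRegisterHandler(FileSystemEventHandler):
    """Fingerprint each image file created in the watched folder."""
    def on_created(self, event):
        if event.is_directory or not event.src_path.lower().endswith(IMAGE_EXTENSIONS):
            return
        # A production watcher would wait for the write to finish before reading.
        digest = hashlib.sha256(Path(event.src_path).read_bytes()).hexdigest()
        # A real service would send this fingerprint to a secure registry.
        print(f"registered {event.src_path} -> {digest[:12]}...")

watched = os.path.expanduser("~/Pictures")  # hypothetical folder to protect
observer = Observer()
observer.schedule(AutoRegisterHandler(), path=watched, recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)  # keep the watcher alive until interrupted
except KeyboardInterrupt:
    observer.stop()
observer.join()
```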
Kashyap Raibagi: On those important concluding remarks, thank you so much, Melissa, for sharing your story with us. Given the sensitive nature of what you’ve been through, I believe it’s very brave of you to come out of something so difficult not just stronger and able to tell your story, but also working on that cause at scale, with the government as well. I’m sure there’s a lot ahead, and I’m looking forward to following your work closely. Thank you for sharing that with us.
Melissa Hutchins: Just on a closing note, because I know this topic can be very heavy: I want to emphasize that I firmly believe there are so many amazing uses of AI, and I truly believe it’s going to help us in so many ways. It’s a truly transformative technology with a lot of benefits. But to see those benefits, we can’t ignore the fact that there will always be bad actors using technology outside of its intended purpose. There are a lot of amazing benefits; we just have to make sure it’s done responsibly.