DEV_DOSSIER

WEIRD AI: A PEEK BEHIND THE CURTAIN
ON THE DEVELOPMENT OF OUR MODEL

Nothing gets us up in the morning like charting uncharted territory. That’s our coffee. Being creative types, we’re at our best when building, especially when it’s something unexpected and new. It’s a rare case of global consensus: no one knows what the future holds for artificial intelligence. Generative AI makes it a little easier to imagine, thanks to the accessible nature of the tools. We can transform our bounty of digital data and construct something new, whether code or a term paper or a celebrity non-endorsement (cough). Fun. Sometimes pretty useful. Not always accurate. And very much not the whole story of machine learning. There’s another branch on the map with the potential to be even more powerful.

Here’s what we’re doing with discriminative AI. Small hint: we’re not discriminating. In fact, we’re doing quite the opposite: giving an ear to the voices shared across the digital landscape we now live in. Generative models learn underlying patterns to generate new instances resembling their training data. Discriminative models, in contrast, are all about classification and labeling. They differentiate between categories and make predictions.
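The distinction can be sketched in a few lines of code. The example below is a minimal, illustrative discriminative classifier (a nearest-centroid rule on toy two-feature data); the feature values and labels are invented for the sketch and are not our actual model or data.

```python
# A discriminative model learns to separate labeled classes rather than
# modeling how the data itself was generated. Toy data for illustration.

def train_centroids(examples):
    """Average the feature vectors for each label (nearest-centroid classifier)."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest (squared distance)."""
    def dist(center):
        return sum((a - b) ** 2 for a, b in zip(center, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Labeled training data: (feature vector, label) pairs.
data = [
    ([0.9, 0.1], "positive"),
    ([0.8, 0.2], "positive"),
    ([0.1, 0.9], "negative"),
    ([0.2, 0.8], "negative"),
]
model = train_centroids(data)
print(predict(model, [0.85, 0.15]))  # lands nearest the "positive" centroid
```

A generative model would instead learn enough about each class to sample new examples from it; the discriminative version only learns the boundary between them, which is exactly what classification and labeling need.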

Media teams have been using this kind of model for a while in programmatic advertising, analytics and optimization. We may not love the spammy, noisy output of it all. But there’s no denying that it’s more accurate and measurable. That’s because these models are supervised, meaning they need labeled data for training to ensure the output actually answers the question being asked.
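"Supervised" and "measurable" are concrete ideas: training data is just (input, label) pairs, and held-out labels let you score the model. A minimal sketch, assuming a toy keyword rule as a stand-in for a real trained classifier (the `label_click_intent` function and ad copy below are invented for illustration):

```python
# Supervised data is (input, label) pairs; measurability means scoring
# predictions against held-out labels. The keyword rule is a toy stand-in.

def label_click_intent(ad_copy):
    """Toy stand-in model: flag copy containing a call-to-action keyword."""
    keywords = ("buy", "subscribe", "sign up")
    return "cta" if any(k in ad_copy.lower() for k in keywords) else "no_cta"

# Held-out labeled examples measure whether outputs answer the question asked.
holdout = [
    ("Buy one, get one free", "cta"),
    ("Sign up for our newsletter", "cta"),
    ("Our story began in 1992", "no_cta"),
    ("Purchase now", "cta"),        # the toy rule misses this phrasing
    ("Subscribe today", "cta"),
]
correct = sum(label_click_intent(text) == gold for text, gold in holdout)
accuracy = correct / len(holdout)
print(f"accuracy: {accuracy:.2f}")  # prints "accuracy: 0.80"
```

The deliberate miss ("Purchase now") is the point: because the labels exist, the failure is visible and countable, which is what makes supervised systems more accurate and measurable than their generative cousins.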

We’ve been asking ourselves how to use this technology since IBM Watson won Jeopardy. And the answer came with an insanely cool neural network architecture called a ‘transformer’. It’s the T in GPT. The breakthrough was a mechanism called self-attention, which allows a model to weigh the importance of different elements in a sequence when making predictions. Before, we could pick out language indicators, like a really advanced keyword search, which powered tools like sentiment analysis. But now? We have a model that can actually learn language and context.
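Self-attention itself is compact enough to sketch. Below is a minimal, simplified version of scaled dot-product attention: each position scores every position in the sequence, softmaxes the scores into weights, and mixes the sequence accordingly. Real transformers learn separate query, key and value projections; here, for illustration, each toy vector plays all three roles.

```python
import math

# Minimal sketch of scaled dot-product self-attention. Each position weighs
# every other position, then mixes the sequence by those weights.

def softmax(scores):
    """Turn raw scores into weights that sum to 1."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(vectors):
    """Simplified: each vector serves as its own query, key, and value."""
    d = len(vectors[0])
    output = []
    for query in vectors:
        # Score this position against every position in the sequence.
        scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
                  for key in vectors]
        weights = softmax(scores)
        # Output is a weighted mix of all positions' value vectors.
        mixed = [sum(w * value[i] for w, value in zip(weights, vectors))
                 for i in range(d)]
        output.append(mixed)
    return output

# Three toy "word" vectors; similar vectors attend more to each other.
sequence = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
result = self_attention(sequence)
```

The weighting is what separates this from keyword matching: the first two (similar) vectors pull strongly on each other and only weakly on the third, so each position’s output reflects its context, not just its own content.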

That’s when we understood what was truly possible.

Advertising is, at its core, no more than sales at scale. You have a conversation. You read the room. Look for cues and an opening. And close the deal. Advertising has always used an army of tactics to compensate for the fact that we aren’t in the room with our audience. But now we can be. We can find common ground with a vast supply of human insight to understand how we can make a stronger case.

It’s one thing to have an idea and another to bring it to life. We charged our development team with the immortal words of Weird Al Yankovic, who wrote in his seminal piece ‘Word Crimes’: “You’ll learn the definitions of nouns and prepositions. Literacy’s your mission. And that’s why I think it’s a good time.” They asked us if we knew how to code in Python. We understood the subtext and got back to task.

But setting up a model and the corresponding data requires an enormous amount of legwork. Trial and error. Questions. Learning. Unlike a number of new generative AI tools, we were building a supervised model. We couldn’t just adapt an existing model or reskin a chatbot. We needed to start from scratch. Auditing LLMs. Identifying label requirements. Researching data sources and APIs.

Since we began, we’ve identified our tech stack. Engaged our developer. We’ve spent countless hours on calls and planning sessions to identify the system requirements, decide on our input and output goals, consider data storage and establish our labeling parameters. We’ve dug into UX/UI. We built lists. Oh so many lists. And we’ve seen the fruits of our labors take shape.

We’re charting a strange new world with our model. And we’re excited to share. In that spirit, we will continue to share our story as we evolve.