
AI “productivity” is a trap unless companies build shared rules, says Carmen Torrijos

Everybody loves the idea of AI making them faster. Fewer people love the part where it forces an organization to get its act together.

Carmen Torrijos, head of AI at the Spanish communications firm Prodigioso Volcán, has a blunt message: if the “productivity” you’re getting from generative AI is just a bunch of lone wolves cranking out drafts faster, you’re doing it wrong. The real payoff comes when the whole shop shares tools, standards, and accountability. Otherwise you’re not saving time, you’re just moving the mess downstream.

Torrijos isn’t a Silicon Valley coder cliché, either. Born in Cuenca, Spain, in 1988, she started in technical translation and philology, then pivoted into computational linguistics. She’s spent more than a decade working on natural language processing (NLP), language models, and AI for communication. She co-wrote La primavera de la inteligencia artificial with José Carlos Sánchez, landed on a Forbes list of 100 creative business profiles, and picked up an AI Network award for her early-career trajectory. The résumé is shiny. Her argument is sharper: AI is language, and language is power over narratives, access to information, and how teams actually work.

Generative AI runs on language, and language is never “neutral”

Generative models write, summarize, rephrase, classify, and pull facts out of documents. That can feel like pure engineering. Torrijos calls nonsense on the idea that it’s neutral.

The raw material here is language: ambiguous, culturally loaded, full of subtext. That’s where computational linguistics earns its keep, turning human intent into something a system can handle, then checking whether the output is actually useful, readable, and faithful to what the human meant.

As companies roll out writing assistants, monitoring tools, internal search, and chatbots, the value shifts to people who can sit between editorial judgment, engineering constraints, and business needs. Installing the tech is the easy part. The hard part is deciding the rules: who can use it, for what, with what review, and who’s responsible when it goes sideways.

And in a communications-heavy business like Prodigioso Volcán, the stakes are immediate: speed without sloppiness, standardization without turning every brand voice into beige oatmeal, automation without erasing what makes the work distinctive.

“Faster drafts” can mean slower everything else

Here’s the dirty secret of AI productivity: a draft produced in 30 seconds can cost you hours if nobody planned for fact-checking, editing, legal review, or alignment with the company’s editorial line.

Torrijos pushes for an end-to-end chain: clear use cases, tool selection, validation protocols, and training. Without that architecture, AI doesn’t reduce work, it relocates it. Usually onto the people who already have too much to do.

She also points out something most companies ignore until it bites them: organizations generate language nonstop: emails, memos, decks, reports, marketing copy. AI can help structure and reuse that textual “capital,” but only if governance is clear. Treat these systems like internal commons: shared references, defined access, named owners. Otherwise the value stays trapped in silos and everyone keeps reinventing the wheel.

How Prodigioso Volcán tries to “industrialize” AI without wrecking quality

Torrijos describes her job less like tinkering in a lab and more like running a cultural and digital change program. “Culture” is the key word, because adopting language models changes habits, definitions of quality, how expertise is valued, and how tasks get divided.

Her version of collective productivity is unglamorous and practical: shared prompt templates, documented use cases, defined levels of review, and structured cross-team debriefs so lessons don’t die in somebody’s private notebook.

If every employee experiments alone, the learning curve resets a hundred times, mistakes multiply, and legal risk becomes a game of whack-a-mole.

She also wants real metrics, not vibes: time saved by task type, correction rates after generation, client satisfaction impact, fewer revision cycles. Measure the workflow before and after. Identify what’s actually automatable. Otherwise the AI debate turns into ideology (true believers vs. skeptics) while the business drifts.
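To make that concrete, here is a minimal sketch of what “measure the workflow before and after” could look like in practice. Everything in it is a hypothetical illustration: the log format, task names, and minute counts are invented for the example, not Torrijos’ or Prodigioso Volcán’s actual metrics.

```python
# Hedged sketch: hypothetical logs of (task_type, minutes_before_AI,
# minutes_after_AI, needed_correction) — not real data.
from statistics import mean

logs = [
    ("summary", 45, 12, False),
    ("summary", 50, 20, True),
    ("press_release", 90, 35, True),
    ("press_release", 80, 30, False),
]

def report(entries):
    """Aggregate average time saved and correction rate per task type."""
    by_type = {}
    for task, before, after, corrected in entries:
        by_type.setdefault(task, []).append((before - after, corrected))
    return {
        task: {
            "avg_minutes_saved": mean(saved for saved, _ in rows),
            "correction_rate": sum(c for _, c in rows) / len(rows),
        }
        for task, rows in by_type.items()
    }

print(report(logs))
```

Even a toy table like this forces the useful question: if half of all press releases still need correction, how much of the “time saved” is real?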

And yes, she worries about style flattening. AI loves the middle of the road. Her fix isn’t banning the tool; it’s drawing a bright line around what stays human: intent, angle, information hierarchy, tone, and responsibility. AI can assist production. It shouldn’t be treated as the author. That means editorial rules and approval circuits that protect a publication’s or brand’s voice.

Bias isn’t a bug you toggle off, it’s a governance problem

Torrijos goes straight at bias. Language models learn from massive corpora, meaning they ingest the world’s stereotypes, power imbalances, and uneven visibility. Bias shows up in how jobs are portrayed, how social groups are described, which sources get implicitly “trusted.” In communications work, those failures don’t stay hidden for long.

Fixing bias isn’t just tweaking a parameter. It’s deciding what’s acceptable, what gets corrected, and who gets to decide. If a generated text repeats a stereotype, who owns it: the vendor, the employee, or the company that deployed the system? Her answer: you need guardrails, because AI can scale bad phrasing faster than any human ever could.

She also argues for ranking use cases by risk. Rephrasing an internal memo isn’t the same as generating public-facing guidance, recommendations, or customer responses. Mature organizations classify tasks by criticality, require stronger validation for sensitive outputs, and document known limitations. People like Torrijos end up translating between engineers, lawyers, communicators, and leadership, because somebody has to.
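A risk ranking like the one she describes can be as simple as a lookup table. The tiers, use-case names, and review rules below are hypothetical assumptions for illustration, not a framework Torrijos has published; the one design choice worth copying is that unknown use cases default to the strictest tier.

```python
# Hedged sketch: hypothetical use-case tiers and review rules.
RISK_TIERS = {
    "internal_memo_rephrase": "low",
    "internal_search_summary": "medium",
    "customer_response": "high",
    "public_guidance": "high",
}

REVIEW_RULES = {
    "low": "spot-check by author",
    "medium": "editor review before reuse",
    "high": "editor review plus legal/fact-check sign-off",
}

def required_review(use_case: str) -> str:
    """Map a use case to its mandatory validation step.

    Unclassified use cases are treated as high risk by default.
    """
    tier = RISK_TIERS.get(use_case, "high")
    return REVIEW_RULES[tier]

print(required_review("customer_response"))
```

Defaulting to “high” means a new, undocumented use of the tool triggers the heaviest review until someone explicitly classifies it.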

Then there’s the “truth” problem. These models produce plausible text, not automatically verifiable facts. In news and communications, that means procedures: demand sources, cross-check, separate synthesis from claims of fact. Productivity is worthless if reliability collapses.

Ads inside AI answers? That’s not a feature, it’s a leash

Torrijos flags a looming issue: advertising baked into AI responses. We’ve seen this movie. Search engines became the front door to information, and monetization reshaped what people saw. With chat-style assistants, the risk gets nastier because the output arrives as a sentence, casual, confident, and easily mistaken for neutral advice.

The ethical problem is obvious. The cognitive problem is worse: conversational answers feel authoritative. Slip a paid suggestion into that voice and users may swallow it as a straight recommendation.

For companies, this turns into a sovereignty question: do you really want a core writing-and-knowledge tool doubling as someone else’s monetization interface?

Her prescription circles back to governance and internal control: choosing solutions with clear usage terms, data control, and no ad incentives; setting sourcing rules for which knowledge bases are allowed; controlling outbound links. If you don’t set those boundaries, platform logic sneaks into professional workflows.

And if the conversational interface starts replacing the open web, pluralism takes a hit. Answers can quietly favor partners, paid placements, or content optimized for visibility. As a language specialist, Torrijos worries about the subtle part: the phrasing itself can steer decisions while the selection mechanism stays invisible.

From a 2013 career “accident” to the rise of hybrid language-and-tech roles

Torrijos says she fell into computational linguistics almost by accident in 2013, after technical translation work at an innovation center. Back then, NLP was already real, but most people didn’t feel it in daily life. Her path shows how text-first careers (translation, philology) can lead straight into technical roles once you’re working with corpora, tools, and models.

That’s also where the job market is heading. Companies want people who understand language as a system and can handle automation constraints, especially in industries where text is the raw material: media, communications, customer service, legal, HR.

The awards and Forbes mention are a signal of the moment: AI for communication isn’t a side project anymore. It’s tied to reputation, customer relationships, compliance, and production speed. And while a model can spit out grammatically correct sentences all day, relevance, nuance, and context still need human steering.

Torrijos’ pitch is pragmatic: don’t treat AI as a universal replacement. Treat it as a lever that needs shared methods, systems everyone can use, broad training, measurement, and serious governance around bias, reliability, and commercial dependence. That’s not an “AI tool rollout.” That’s a management decision.

Louise Lamothe
A bibliophile hooked on news of all kinds, Louise loves sharing her discoveries through her articles.
