Years before the “ChatGPT moment,” we at viewneo were already driving the integration of advanced systems into digital signage software and hardware. For instance, cameras with gender recognition, driven by trained AI models, were deployed to deliver the right content to the right audience, distinguishing between men and women and even estimating age. However, what’s coming in the years ahead will bring a whole new dimension.

The “ChatGPT Moment”: A Turning Point in the World of Software

The so-called “ChatGPT moment” began in November 2022, when OpenAI released ChatGPT to the public. This release marked a milestone in the development of generative AI, fundamentally altering both the public perception of artificial intelligence and its applications in the software industry and beyond.

Why Were Generative Models Revolutionary?

Before ChatGPT, there were already significant advancements in AI, particularly in machine learning, speech recognition, and computer vision. However, ChatGPT took these technologies to a new level by demonstrating the ability to conduct human-like conversations and provide context-aware responses. This breakthrough was not only technological but also societal:

  • Accessibility: For the first time, millions of people could experiment with AI in their daily lives without any technical background. Its simple user interface made it accessible not just to developers but also to everyday users.
  • Fascination and Curiosity: The ChatGPT moment sparked widespread discussions about AI’s possibilities, risks, and ethical implications.

Impacts on Software Solutions

The influence of the ChatGPT moment on software development and the tech industry has been enormous. In the digital signage sector, AI has unlocked significant potential, allowing for enhanced user experiences and streamlined processes.

AI doesn’t just change the way software is developed; it enables the creation of intelligent tools for content management that save users substantial time. Many processes could soon become entirely autonomous. When considering the exponential progress of AI, the possibilities seem limitless.

Stage I

This stage predates the ChatGPT moment. AI was primarily used for tasks like text and facial recognition. For example, camera images were analyzed, and models trained on high-performance computers could recognize characteristics such as gender, age, or specific details like “wearing a black hat.”

AI models form the backbone of such systems: trained on data to perform specific tasks, they enable the system to process information, identify patterns, and make decisions.

Stage II – Generative Artificial Intelligence

With ChatGPT came the breakthrough of LLMs, short for “Large Language Models”: a specific type of model designed to understand, process, and generate natural language. LLMs are the cornerstone of modern generative AI, as seen in ChatGPT and other language assistants, and today they help students, freelancers, and employees around the world with their tasks. These models haven’t just learned language; they have also absorbed the vast amount of knowledge embedded in the training materials used to develop them. Sometimes I think even OpenAI, the company that developed ChatGPT, was surprised by the capabilities of its product.

GPT stands for “Generative Pre-trained Transformer.” It is a specialized type of model developed by OpenAI to understand and generate human-like text. The term describes both the technology and the training concept behind these models.

For me, Stage II begins with the rise of generative AI. Previously, models were used to determine, for example, whether an image depicted a dog or a cat, or to distinguish between a man and a woman (Stage I). With generative AI, however, something entirely new emerged. The term “generative” emphasizes the model’s ability to create new content. Unlike purely analytical or classification models, GPT can generate new text based on a given input (prompt). This enables:

  • Text generation: Writing articles, stories, poems, or reports.
  • Answering questions: Generating responses based on contextual input.
  • Creativity: Producing original and often surprisingly creative content.

At viewneo, we recently introduced an AI feature for text optimization. This tool makes content creation significantly easier for users, particularly when working with text-based content.

  • Shortening texts: Imagine you have a long article that needs to be condensed into three sentences because the digital signage template has limited space. Without AI, this task could easily take 20 minutes or more. Now, with AI, it takes just a few seconds—provided you trust the AI enough to skip reading the original text. Most common models are already capable of this.
  • Optimizing texts: At viewneo, users now write their text drafts quickly, with simple, even incomplete sentences. A single click on “optimize text” generates new suggestions in seconds, improving the text with each iteration.
  • Generating texts: Similarly, with just one prompt (a line of input or command) written in clear and simple language, you can request what you need:
    “A short text of fewer than 500 characters, consisting of a title and a text block, informing about one of humanity’s greatest achievements of the 19th century. Written in a journalistic style, like that of a daily newspaper.”
    In seconds, the AI delivers a finished text. It has already decided which achievement it considers the most significant and included facts about it. You can even ask ChatGPT why it made that choice and get a response. You don’t need to do much research, make decisions, or worry about spelling and grammar; the result is OK to good. Just validate the facts with Google and tweak the text slightly to add a personal touch.
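
To make this tangible, here is a minimal sketch of how such a prompt could be sent to a language model programmatically (not necessarily how viewneo’s feature is built). It assumes the official OpenAI Python SDK (openai >= 1.0) with an API key set in the environment; the model name is only an example.

    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY to be set in the environment

    prompt = (
        "A short text of fewer than 500 characters, consisting of a title and a "
        "text block, informing about one of humanity's greatest achievements of "
        "the 19th century. Written in a journalistic style, like that of a daily "
        "newspaper."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name; any current chat model works
        messages=[{"role": "user", "content": prompt}],
    )

    # The generated title and text block, ready to drop into a signage template.
    print(response.choices[0].message.content)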

Generating Images

When it comes to content creation, LLMs save enormous amounts of time. But there’s even more. When people think of AI today, they often think of the many images and videos created from simple text inputs. The Pope in a white puffer jacket—you’ve probably seen it. Platforms like Midjourney use AI to generate impressive and unique images based on text prompts. When I created my first “photos,” I was amazed by the results.

An image generated on the Midjourney platform.

The prompt, or the line of text I used to create this image, was as follows:

“Gourmet burger restaurant with high-quality materials, elegant decorations, and an upscale atmosphere.”

That’s it. Just a few seconds of work. The result was an image that looks better than many of the ones I see in today’s menus. Imagine the effort it would take to create such an image otherwise, especially if you were to simulate real food photography in a studio. Studio photography, followed by Photoshop editing, could easily take an entire day. And the cost!
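
Midjourney is typically driven through its own interface rather than a public programming interface, so purely as an illustration, here is a comparable sketch using OpenAI’s image-generation endpoint via its Python SDK; the model name and parameters are assumptions, and the prompt is the one quoted above.

    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    result = client.images.generate(
        model="dall-e-3",  # example model; swap in whichever image model you use
        prompt=(
            "Gourmet burger restaurant with high-quality materials, elegant "
            "decorations, and an upscale atmosphere."
        ),
        size="1024x1024",
        n=1,
    )

    # URL of the generated image, ready to be downloaded into the media library.
    print(result.data[0].url)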

Stage III – The Era of Agents

This is where things get exciting. So-called AI agents have the potential to completely replace humans in the content creation process for smart digital displays. But what exactly are AI agents, and how do they work?

What Are AI Agents?

AI agents are autonomous software programs powered by artificial intelligence that can perform tasks independently. They interact with their environment, make decisions based on received data and objectives, and dynamically adapt their behavior to achieve those goals. Not only can you deploy individual agents for specific purposes, but you can also let an entire swarm of agents work collaboratively—known as agent swarms.
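
Conceptually, such an agent runs a perceive–decide–act loop. The sketch below is purely illustrative: all class names, fields, and rules are hypothetical stand-ins for real sensors, analytics feeds, and LLM-backed decision logic.

    from dataclasses import dataclass

    @dataclass
    class Observation:
        """A snapshot of the agent's environment (hypothetical fields)."""
        audience_size: int
        time_of_day: str

    class SignageAgent:
        """Illustrative autonomous agent: it perceives, decides, and acts toward a goal."""

        def __init__(self, goal: str):
            self.goal = goal

        def perceive(self) -> Observation:
            # A real agent would read sensors, analytics APIs, or a CMS here.
            return Observation(audience_size=12, time_of_day="morning")

        def decide(self, obs: Observation) -> str:
            # Real decision logic would typically be delegated to an LLM or policy model.
            if obs.time_of_day == "morning" and obs.audience_size > 10:
                return "show_breakfast_promotion"
            return "show_default_playlist"

        def act(self, action: str) -> None:
            print(f"[{self.goal}] executing: {action}")

    agent = SignageAgent(goal="maximize content relevance")
    agent.act(agent.decide(agent.perceive()))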

What Are Agent Swarms?

Agent swarms are groups of autonomous AI agents that cooperate and solve collective tasks using swarm intelligence. Each agent operates independently, following simple local rules while coordinating with the rest of the group. This results in emergent behavior that enables the swarm as a whole to tackle complex problems.

Solving Complex Problems

What kinds of complex problems can they solve? Let’s consider what content creation for an AI-powered information board actually requires.

An editor typically needs to produce content for specific categories (e.g., corporate news for internal communication). This involves selecting potential topics or news items deemed relevant enough to be published. The editor must then research, send emails, wait for responses to gather information, search the internet, or consult with the works council on current topics. Afterward, they must write texts and select or create images.

In the future, all of this will happen autonomously: intelligent signage solutions will deliver highly relevant content efficiently, dynamically tailoring what is shown to audience preferences and eliminating manual workflows.

The Workflow of AI Agents

1. Topic Research and Selection

One of the first steps in content creation is identifying relevant topics. Within an agent swarm, this could work as follows:

  • Trend Analysis Agents: Specialized agents scan social media, news platforms, and forums to identify current trends and discussions. They analyze hashtags, Google search queries, or viral content and report potentially interesting topics to the swarm.
  • Audience Analysis Agents: Simultaneously, other agents analyze audience preferences based on data such as reading behavior or feedback on previous articles. These agents prioritize topics that are particularly relevant to the target audience.
  • Swarm Coordination: A coordinating agent aggregates the suggestions, evaluates them for relevance and urgency, and compiles a topic list for the editorial team.
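
As a rough illustration of the coordination step, the sketch below shows how a coordinating agent might merge suggestions from trend- and audience-analysis agents and rank them. The scoring rule, weights, and all example topics are invented for the example.

    from dataclasses import dataclass

    @dataclass
    class TopicSuggestion:
        """A candidate topic reported by one of the analysis agents (hypothetical)."""
        title: str
        relevance: float  # 0..1, how relevant the reporting agent believes it is
        urgency: float    # 0..1, how time-critical it is

    # Invented example output from trend-analysis and audience-analysis agents.
    trend_suggestions = [
        TopicSuggestion("New cafeteria menu launches", relevance=0.6, urgency=0.8),
        TopicSuggestion("Industry trade fair recap", relevance=0.7, urgency=0.4),
    ]
    audience_suggestions = [
        TopicSuggestion("Works council election results", relevance=0.9, urgency=0.7),
    ]

    def coordinate(*suggestion_lists, top_n: int = 3):
        """Aggregate suggestions from all agents and rank them by a simple score."""
        pool = [s for suggestions in suggestion_lists for s in suggestions]
        ranked = sorted(pool, key=lambda s: 0.6 * s.relevance + 0.4 * s.urgency, reverse=True)
        return ranked[:top_n]

    for topic in coordinate(trend_suggestions, audience_suggestions):
        print(topic.title)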

2. Research and Information Gathering

Once a topic is chosen, specialized agents handle the research:

  • Data Agents: These agents extract facts, statistics, and background information from reliable sources such as academic publications, official reports, or databases.
  • Source Verification Agents: Another part of the swarm checks the credibility of sources and evaluates the data for accuracy, timeliness, and reliability.
  • Language and Context Analysis Agents: To ensure a deeper understanding, these agents analyze the context and filter out contradictory information, guaranteeing that the article is well-founded.

3. Text Generation

The actual writing process can also be handled by an agent swarm:

  • Structure Agents: These agents develop a logical outline for the article based on the researched information. They create an introduction, core paragraphs, and a conclusion.
  • Writing Agents: Specialized writing agents generate the text in various styles (e.g., factual, journalistic, or narrative) and tailor it to the target audience.
  • Tone and Style Agents: These agents ensure that the tone of the text remains consistent and aligns with the brand or publication.

4. Editing and Quality Assurance

Before publication, the text goes through several rounds of checks, all performed by agents:

  • Grammar and Spell-Checking Agents: These agents automatically identify and correct linguistic errors.
  • Plagiarism Agents: They verify that the text contains no unauthorized copies and ensure originality.
  • Fact-Checking Agents: These agents validate the claims within the text by cross-referencing them with reliable sources.

5. Content Optimization

Before publication, other agents optimize the content to maximize reach and visibility:

  • SEO Agents: These agents analyze keywords and adjust the text to improve search engine rankings.
  • Multimedia Agents: They add suitable images, videos, or infographics to make the article more engaging.
  • Formatting Agents: These agents ensure the text is appropriately formatted for various platforms (web, mobile, print).

6. Publication and Distribution

After final approval, specialized agents manage publication and promotion:

  • Publication Agents: These agents upload the article to websites, schedule social media posts, or create newsletters to distribute the content.
  • Engagement Agents: They monitor how the article performs with the audience by analyzing comments, clicks, and shares.
  • Feedback Analysis Agents: Based on reader feedback, these agents suggest topics for future articles, feeding back into the topic selection process.
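
To make the overall flow concrete, here is a hedged sketch of how the six stages could be chained into a simple pipeline. In a real agent swarm, each stage would fan out to its own specialized agents and models; every function body and example value here is a placeholder.

    from typing import Callable, Dict, List

    # Each stage receives the shared article state and returns an updated copy.
    Stage = Callable[[Dict], Dict]

    def select_topic(state: Dict) -> Dict:
        return {**state, "topic": "New company cafeteria opens next month"}

    def research(state: Dict) -> Dict:
        return {**state, "facts": ["Opening on the 1st", "Seats 120 people"]}

    def write_draft(state: Dict) -> Dict:
        draft = f"{state['topic']}. " + " ".join(state["facts"]) + "."
        return {**state, "draft": draft}

    def quality_check(state: Dict) -> Dict:
        return {**state, "approved": len(state["draft"]) < 500}

    def optimize(state: Dict) -> Dict:
        return {**state, "draft": state["draft"].strip()}

    def publish(state: Dict) -> Dict:
        if state["approved"]:
            print("Publishing to the signage playlist:", state["draft"])
        return state

    pipeline: List[Stage] = [select_topic, research, write_draft, quality_check, optimize, publish]

    state: Dict = {}
    for stage in pipeline:
        state = stage(state)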

The same principles apply to creating templates, managing media content, and even planning a digital signage network or selecting optimal locations for monitors or LED walls. AI agents will play a crucial role in all of these tasks.

While there’s still a bit of a road ahead, everything I’ve described above already exists—albeit with a few rough edges. Think back to the first AI-generated images. Just a few years ago, those initial images and videos couldn’t compare to what today’s AI models produce. The development has been exponential, and for some, it’s hard to grasp.

Vision: Imagine a future where traditional software interfaces are replaced with a platform where you can literally “talk” to agents. You simply explain what you need, the AI asks clarifying questions about the task, confirms its understanding, and then gets to work. This is how many solutions that currently require traditional software will function. In the future, you won’t need to create detailed prompts for individual tasks; you’ll only interact with your top “manager agent,” who will handle everything else autonomously.


Did I write this text myself? Yes, but with the diligent support of ChatGPT. Of course!

Author

Manfred Lüdtke is the driving force behind the groundbreaking digital signage solutions at viewneo. With decades of experience in digital transformation, Manfred is dedicated to making cutting-edge technologies accessible to businesses of all sizes.