Boosting Product Engineering with Language Models like ChatGPT

We published an article on ChatGPT in Dec’22 with the help of the AI tool. Despite the numerous ChatGPT blogs and tutorials on the internet, we’ve decided to publish another one, this time written completely by a human and not an AI tool, for two reasons: one, we can still write, and two, we wanted to write about what we think of AI tools, not about what an AI tool thinks we should think. As with our other blogs, this one looks at ChatGPT and similar LLM (Large Language Model) tools from a product engineering perspective. Here it is!

The current abilities of AI language models perplex many. They generate large amounts of sensible text in a jiffy and, astonishingly, in many styles. However, we now know that these tools are not (yet) very reliable. Rather than humbly acknowledging and clarifying gaps in their knowledge, they fill them with false information, made-up facts, contradictory details, fabricated sources, and more.

The scariest part is that they present these errors convincingly, with great confidence, and in polished language. Many see this as a big problem. Some fear that the worst problems AI can create are imminent. However, it’s time innovators took responsibility, understood the limits, and saw them as an opportunity to create better tools.

Here are a few thoughts on why we should start applying ChatGPT in various domains, how to understand its limitations, and what we can do to minimize its undesirable side effects.

AI in Instructional Design

Educators seem to be impressed with LLMs’ ability to explain concepts in simple and lucid language. These models won’t replace teachers, since not everything they generate is correct, but many teachers are already using ChatGPT in their teaching. Khan Academy, a popular e-learning platform, is already testing it in its product to explain concepts through meaningful conversations customized to every learner.

Have you ever struggled to grasp the concept of infinity from a lecture, or the orientation of Earth’s axis as it revolves around the sun? The good news is that AI tools like ChatGPT can reduce the load on teachers by taking on the role of a good teacher and, more importantly, by always being available to students.

Very soon, we will see more advanced EdTech applications that can help learners understand even difficult concepts in simple language, evaluate their progress, and help them improve.

The same is true for training employees and preserving knowledge within a company. AI tools can be used to train new employees irrespective of their knowledge level, helping them get acquainted with work systems and fulfill their responsibilities. This can be particularly helpful when a learner is shy and refrains from discussing learning gaps for fear of judgment.

AI tools are also useful in (co-)creating documentation for code and products. Developers need not worry too much if their language skills don’t match their programming skills. The tools can be used to preserve knowledge within the company rather than lose it when employees change jobs. They can also be used to check algorithms for logical flaws and to verify that innovations align with the company’s vision.
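To make the documentation use case concrete, here is a minimal sketch, assuming Python, OpenAI’s public chat-completions REST endpoint, and an `OPENAI_API_KEY` environment variable. The helper names and the prompt wording are our own illustration, not a prescribed method:

```python
import json
import os
import urllib.request


def build_docstring_prompt(source_code: str) -> str:
    """Construct a prompt asking a chat model to document the given code."""
    return (
        "Write a concise docstring for the following Python function. "
        "Describe its parameters, return value, and side effects.\n\n"
        f"{source_code}"
    )


def draft_docstring(source_code: str) -> str:
    """Send the prompt to OpenAI's chat completions endpoint and return
    the model's suggested docstring (requires network access and an API key)."""
    payload = json.dumps({
        "model": "gpt-4",
        "messages": [
            {"role": "user", "content": build_docstring_prompt(source_code)}
        ],
    }).encode()
    request = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=payload,
        headers={
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["choices"][0]["message"]["content"]
```

Because the prompt-building step is a pure function, it can be unit-tested offline; only `draft_docstring` needs network access and an API key, and its output should still be reviewed by a human before it lands in the codebase.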

GPT-4 is better than GPT-3 at generating accurate responses. It can also interpret images and summarise large amounts of text. The latter is especially useful as products and platforms grow bigger and their documentation becomes too unwieldy to read in full.

Need for Reliable AI Checkers

On the other side, educators are concerned about students using ChatGPT to write their essays and pass their assessments. ChatGPT doesn’t make our language skills redundant; in fact, to get the most out of AI tools, one needs good writing and comprehension abilities. Currently, no tool is good at detecting AI-generated content, and detection tools are not improving as fast as LLMs like ChatGPT and Bard. With minimal text changes, one can pass the checks easily.

AI-generated content is often easier to detect manually, when the tools make errors or state false facts. While some universities may go back to paper submissions, or to digital submissions from a computer not connected to the internet, what is a possible solution for checking submissions on e-learning platforms like Coursera and edX?

Not just educators; many artists are currently pondering whether their art will lose value in the face of advanced AI tools that can copy their style and create similar works easily and in far less time. We no longer know if online assessments are reliable.

We need reliable AI checkers that grow along with AI tools, and they should be able to check not just text but also images and videos. AI tools are reportedly trained on moral judgments by contract workers in low-income countries, at the cost of their mental health. As AI tools gain wide acceptance and application, they need to be trained meticulously. We need novel technology solutions that can train AI tools to be more reliable without that human cost.

AI Tools and Digital Transformation

There’s no doubt that AI chatbots are widely applied in software across diverse domains. Beyond their obvious uses as efficient language models, tools like ChatGPT are likely to disrupt many more sectors that rely on careful wording, negotiation, and explanation, such as legal tech, prop-tech, and EdTech.

The tools are great at generating ideas and will find applications in many creative fields such as art, filmmaking, animation, and writing. Being good at recognizing patterns and deviations from them, AI tools show promise for advanced disease-prediction systems, public-policy research, climate-change management, disaster prediction, and more.

No, AI can’t Replace Humans

Many of the common concerns about unreliable AI tools are well founded. Still, AI can’t replace humans, even though it can outperform us in certain tasks. Just like calculators and computers, AI tools are meant to simplify our tasks; they can’t replace us, because humans are varied and so are their intelligences. AI is good at processing data, identifying patterns, and putting knowledge into good words, but it can’t smell, taste, or feel touch. It lacks our sensory knowledge, and its empirical knowledge is based on, and only as good as, the data we feed it.

That’s not all: grouping knowledge and forming causal relationships are not the same as acquiring a large vocabulary. Language and the relationships between disconnected ideas form the basis of reinforced learning. The patterns in our biological knowledge systems are so complex that even humans can’t identify them accurately yet.

Though AI can give us a peek into how those patterns are formed, it can’t form them as well as a human mind can. AI tools are being adapted for many domains, but we must accept that the moral and ethical concerns around their widespread use are legitimate. It is therefore important that innovators tread carefully and in the right direction. The directions above are some in which we think AI will grow soon; this article is by no means exhaustive.

As I type these words, I imagine innovators from across the world brainstorming for new ideas on AI implementations. Are you keen on giving your products and platforms new AI capabilities? Let’s discuss the possibilities and build ethical AI innovations.

For more details

    Divya Prathima

    The author was a Java developer at coMakeIT before becoming a stay-at-home mom. She slowed down to make art, tell stories, read fiction and books on philosophy, science, and art history, write about science and parenting, and observe technology trends. She loves to write and aspires to one day write articles as simple and understandable as Yuval Noah Harari’s. We are very happy to have her back at coMakeIT, contributing to our relevant and thought-provoking content.