Exploring Generative AI Agents: Architecture, Applications
Research Article
Keywords: Agent Architectures in AI, AI Task Planning and Tool Integration, Applications of AI Agents
in Industry, Ethical Challenges in Generative AI, Generative AI Agents, Large Language Models (LLMs)
*Corresponding author: Inesh Hettiarachchi
Received: 15-03-2025; Accepted: 30-03-2025; Published: 16-04-2025
Copyright: © The Author(s), 2024. Published by (JAIGS). This is an Open Access article, distributed under the terms of
the Creative Commons Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted
use, distribution and reproduction in any medium, provided the original work is properly cited.
Journal of Artificial Intelligence General Science (JAIGS)
1. Introduction
Artificial intelligence has advanced remarkably in recent years through generative AI, which uses learned data distributions to produce new, relevant content. Generative AI comprises algorithms that generate text, images, audio, and video whose outputs are often indistinguishable from human-made content. These models let machines produce new, coherent content by building on statistical patterns learned from large-scale datasets. At the frontier of this innovation sits a transformative concept: Generative AI Agents. These agents combine generative capacity with autonomous goal pursuit, reasoning, memory, planning, and interfaces to external environments. Generative AI agents are distinguished from basic generative models by their ability to decompose assignments independently, call external tools or APIs, adjust their responses to changing inputs, and improve performance through self-reflective or reinforcement-based learning. This development marks a substantial shift from static prompt-response systems toward adaptive, multi-step reasoning agents capable of decision-making and workflow execution.
Generative AI agents build on decades of research that began with symbolic AI, expert systems, and rule-based agents in the earliest stages of the field. Those early systems operated on hand-built logic but could not generalize beyond their programmed rules to new situations. Pattern recognition improved dramatically as machine learning matured into data-driven deep learning models. Transformer-based architectures from OpenAI, Google, and Meta, including the GPT family, BERT, PaLM, and successive LLaMA releases, gave large language models (LLMs) fluent natural language understanding and generation. In 2023, autonomous AI agents such as AutoGPT, BabyAGI, and AgentGPT showed how systems can combine sequential reasoning, built-in tool execution, web queries, and API requests into goal-focused control cycles. The technology connects natural language processing with reinforcement learning, knowledge representation, planning, and cognitive modeling through Large Language Models (LLMs).
The study of generative AI agents warrants immediate investigation because of its present importance. Organizations and researchers increasingly need AI systems that are generative yet also autonomous, interactive, and trustworthy in mission-critical settings, including scientific discovery, software development, customized education, and business automation. Real-world deployments of such agents raise urgent ethical and operational challenges, from agent alignment and safety to interpretability and operational stability, that call for a structured assessment of their core architectures and operating constructs. AI agents now appear to pave the way toward Artificial General Intelligence (AGI) systems and already handle complicated workflows that humans normally complete.
This article delivers an extensive examination of generative AI agents, explaining their structural designs and application areas with a focus on current technical problems and ethical considerations. Section 2 reviews foundational concepts and recent developments in generative AI and agent-based systems. Section 3 describes the modular construction of generative agents, including their reasoning engines, memory management, and environment-access mechanisms. Section 4 demonstrates practical industrial applications, and Section 5 examines present-day barriers to adoption along with existing risks and challenges. Section 6 reviews new research directions, and Section 7 delivers the summary together with suggested practices for responsible development and evaluation.
programming and loop interfaces. The system reorders tasks through continuous feedback assessment as its objectives evolve.
AgentGPT provides a web platform that lets users launch self-directed agents with customizable goals; it emphasizes modularity and simplicity.
This theoretical framework specifies how generative AI agents unite natural language processing, autonomous reasoning components, and environment-interaction capabilities. The progression of agent models signals the emergence of systems that will transform how machines handle challenging real-world operations.
LLMs operate either statelessly, without memory storage, or linked to external memory modules that strengthen their contextual awareness.
3.2 Memory and Long-Term Context
Effective operation of generative agents across extended work periods and varied assignments demands both short-term and long-term memory. Memory systems store four types of data: user preferences, previous task results, steps already performed, and models of the world.
Key components include:
• Working memory provides temporary storage for the current session, including recent conversation dialogue and the present task steps.
• Long-term memory stores learned behaviors, interaction logs, knowledge bases, and user-specific preferences.
• Embedding databases use vector stores such as FAISS, Pinecone, and Weaviate to perform semantic search and retrieve relevant past content.
Current agents implement Retrieval-Augmented Generation (RAG), querying their long-term memory to draw on more context than the LLM's token limit would otherwise allow; a minimal sketch of this pattern follows.
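To make this concrete, the sketch below shows one way an agent might store and retrieve long-term memories with FAISS, one of the vector stores named above. The embedding function, dimensionality, and LongTermMemory class are illustrative assumptions; a real deployment would use a trained embedding model rather than the hashing stub shown here.

# Illustrative sketch of RAG-style long-term memory (not this article's implementation).
# `embed` is a toy stand-in for a real embedding model such as a sentence-transformer.
import faiss
import numpy as np

DIM = 64  # embedding dimensionality (assumed)

def embed(text: str) -> np.ndarray:
    """Toy per-process-deterministic embedding; replace with a real model in practice."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(DIM).astype("float32")
    return v / np.linalg.norm(v)

class LongTermMemory:
    def __init__(self) -> None:
        self.index = faiss.IndexFlatIP(DIM)   # inner product == cosine on unit vectors
        self.texts: list[str] = []

    def store(self, text: str) -> None:
        self.index.add(embed(text).reshape(1, -1))
        self.texts.append(text)

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        if not self.texts:
            return []
        _, idx = self.index.search(embed(query).reshape(1, -1), min(k, len(self.texts)))
        return [self.texts[i] for i in idx[0]]

memory = LongTermMemory()
memory.store("User prefers weekly summaries as bullet points.")
memory.store("Task 12 failed because the API key expired.")

# Retrieved snippets are prepended to the prompt so the LLM sees context
# beyond its own token window.
question = "Why did the last task fail?"
context = "\n".join(memory.retrieve(question))
prompt = f"Relevant memory:\n{context}\n\nUser question: {question}"
print(prompt)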
3.3 Task Planning and Execution
Generative agents differ from stand-alone LLMs because their planning modules break down complex tasks
into sub-tasks, which the system organizes into executable sequences while monitoring their execution path.
Popular planning mechanisms include:
• Chain-of-Thought (CoT) Reasoning
• Tree of Thoughts (ToT)
• Task Graph Planning
• ReAct (Reasoning + Acting) frameworks
Many agents, including Auto-GPT and BabyAGI, deploy task-planning engines that generate execution schedules, inspect their own performance, and adapt the plan based on result feedback. These systems run an iterative loop in which the agent continuously evaluates and revises its execution path.
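As a concrete illustration of this iterative loop, the sketch below shows a minimal ReAct-style controller. The JSON step format, the tool names, and the scripted call_llm stand-in are assumptions made for this example, not an interface defined by Auto-GPT or BabyAGI.

# Minimal sketch of an iterative plan-act-observe loop (illustrative only).
import json

# Scripted stand-in for a model call; a real agent would query an LLM API here.
_SCRIPTED_STEPS = [
    '{"thought": "I need background material first.", "action": "search", "input": "agent planning frameworks"}',
    '{"final": "Collected background material; ready to draft the plan."}',
]

def call_llm(transcript: str) -> str:
    return _SCRIPTED_STEPS.pop(0)

# Toy tool registry; real agents would wrap live APIs here.
TOOLS = {
    "search": lambda query: f"(top results for {query!r})",
    "note": lambda text: f"(saved note: {text})",
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    transcript = f"Goal: {goal}\n"
    for _ in range(max_steps):
        step = json.loads(call_llm(transcript))
        if "final" in step:                      # the agent declares the task done
            return step["final"]
        observation = TOOLS[step["action"]](step["input"])
        transcript += (f"Thought: {step['thought']}\n"
                       f"Action: {step['action']}({step['input']!r})\n"
                       f"Observation: {observation}\n")
    return "Stopped after exhausting the step budget."

print(run_agent("Outline a project plan for a literature review"))

The step budget and the explicit "final" marker are the two simplest safeguards against runaway loops; production frameworks add richer termination and error-handling logic.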
3.4 Tool/Environment Interaction (APIs, Browsers, Plugins)
A generative AI agent differs from a conversational LLM in that it can execute tasks involving external tools and environments in pursuit of its goals.
Examples include:
• API wrappers (e.g., for weather, news, emails, calendars)
• Web-browsing plugins that retrieve and process information in real time
• Local and cloud file-system access for reading and writing documents through APIs
• Code-execution environments such as Python shells and Jupyter notebooks
• GUI automation tools (e.g., Selenium, Puppeteer)
Generating correct function calls, handling outputs in real time, and making decisions based on results form the core of these integrations, which researchers continue to study and improve.
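The sketch below illustrates one way such an integration layer might validate and dispatch a model-produced function call to local tools. The JSON call format and the tool names are assumptions for illustration; commercial providers each define their own function-calling schema.

# Illustrative tool-dispatch layer for a model-produced "function call".
import datetime
import json
import pathlib

def read_file(path: str) -> str:
    return pathlib.Path(path).read_text(encoding="utf-8")

def write_file(path: str, content: str) -> str:
    pathlib.Path(path).write_text(content, encoding="utf-8")
    return f"wrote {len(content)} characters to {path}"

def current_time() -> str:
    return datetime.datetime.now().isoformat(timespec="seconds")

TOOLS = {"read_file": read_file, "write_file": write_file, "current_time": current_time}

def dispatch(tool_call_json: str) -> str:
    """Parse the model's tool call, validate it, run it, and return an observation."""
    call = json.loads(tool_call_json)
    name, args = call.get("name"), call.get("arguments", {})
    if name not in TOOLS:
        return f"error: unknown tool {name!r}"
    try:
        return str(TOOLS[name](**args))
    except Exception as exc:            # errors are fed back so the agent can recover
        return f"error: {exc}"

# Example of a call an LLM might emit during a run:
print(dispatch('{"name": "current_time", "arguments": {}}'))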
3.5 Feedback and Reinforcement Mechanisms
Agents become more reliable and efficient when feedback systems help them evaluate themselves and absorb feedback from external sources.
Common strategies include:
• Self-evaluation prompts that ask the agent questions such as "Was the last step correct?"
• Human-in-the-loop correction
• Reinforcement Learning from Human Feedback (RLHF)
• Reinforcement Learning with AI Feedback (RLAIF)
These mechanisms help agents reduce hallucinations, detect errors, and continuously improve output quality. Reinforcement-based strategies are indispensable for automated systems that must operate long-term with reduced human oversight.
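A minimal sketch of the self-evaluation pattern appears below. The verdict format and the canned call_llm responses are assumptions for illustration; in practice both the critique and the revision would come from the underlying model.

# Illustrative self-critique pass: the agent judges its own last step and
# revises once if the critique flags a problem.
def call_llm(prompt: str) -> str:
    """Canned stand-in for the model: fails the first draft, passes the revision."""
    return "VERDICT: FAIL - missing the requested date range." if "attempt 0" in prompt else "VERDICT: PASS"

def self_check(task: str, draft: str, max_retries: int = 2) -> str:
    answer = draft
    for attempt in range(max_retries + 1):
        critique = call_llm(
            f"Task: {task}\nDraft answer: {answer}\n(attempt {attempt})\n"
            "Was the last step correct? Reply 'VERDICT: PASS' or 'VERDICT: FAIL - reason'."
        )
        if critique.startswith("VERDICT: PASS"):
            return answer
        # A full agent would feed the failure reason into a revision prompt;
        # here the draft is simply annotated to keep the sketch short.
        answer = f"{draft} (revised after critique: {critique})"
    return answer

print(self_check("Summarise Q1 sales", "Sales rose 12%."))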
3.6 Autonomy vs. Control Trade-Offs
Building AI agents requires engineers to balance human supervision against machine autonomy. Systems face opposite failure modes at either extreme: excessive freedom produces unwanted results, while excessive control limits adaptability and usefulness.
Key considerations include:
• Hard-coded constraints vs. adaptive learning
• Approval gates (e.g., requiring user validation for high-risk actions)
• Safeguards that require agents to justify their actions to users
• Dynamic control knobs that let users adjust exploration, creativity, and execution thresholds
Future systems will likely expose adjustable autonomy controls to serve different application requirements, from fully self-running research agents to assistive co-pilots.
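One simple way to implement an approval gate and an adjustable autonomy threshold is sketched below. The risk scores, the threshold, and the approve callback are illustrative assumptions, not a standard mechanism.

# Illustrative approval gate: actions above a risk threshold wait for human sign-off.
from typing import Callable

RISK_SCORES = {"read_file": 0.1, "send_email": 0.7, "delete_file": 0.9}  # assumed scores

def execute(action: str,
            approve: Callable[[str, float], bool],
            autonomy_threshold: float = 0.5) -> str:
    risk = RISK_SCORES.get(action, 1.0)          # unknown actions default to high risk
    if risk > autonomy_threshold:
        if not approve(action, risk):
            return f"'{action}' blocked pending user approval."
        return f"'{action}' executed with approval."
    return f"'{action}' executed autonomously."

# The autonomy threshold acts as a "control knob": raising it grants more autonomy.
deny_all = lambda action, risk: False
print(execute("read_file", deny_all))      # low risk: runs without asking
print(execute("delete_file", deny_all))    # high risk: held for approval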
Significant advances have turned generative AI agents into practical tools that now span multiple domains. These agents build on large language models (LLMs), adding planning and memory components and interfacing with external environments through APIs and tools. Across many fields they autonomously generate, execute, and evaluate tasks. They are redefining work processes, supporting decision-making, and transforming how information is distributed. The following subsections detail the areas that generative AI agents affect most.
In business automation, generative AI agents operate autonomously to handle tasks and decision-making duties, from routine work such as meeting-note creation and email management to advanced Enterprise Resource Planning functions.
Business applications of AutoGPT and AgentGPT, along with enterprise GPTs such as Salesforce's Einstein GPT, illustrate the following capabilities:
• Agents autonomously review support tickets, classify priority and topic, and launch pre-approved resolution workflows, saving staff time (a sketch of this pattern follows this list).
• Systems analyze markets and predict trends by combining news feeds with financial data streams.
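The sketch below, referenced in the support-ticket item above, shows the shape of such a triage pipeline. The categories, keyword rules, and workflow names are assumptions for illustration; a deployed agent would replace the keyword classifier with an LLM call and the workflows with real integrations.

# Illustrative ticket-triage pipeline: classify, prioritise, and launch a
# pre-approved workflow.
WORKFLOWS = {
    "billing": lambda t: f"opened billing review for: {t}",
    "outage":  lambda t: f"paged the on-call engineer for: {t}",
    "general": lambda t: f"sent an acknowledgement for: {t}",
}

def classify(ticket: str) -> tuple[str, str]:
    """Keyword stand-in for an LLM classifier; returns (topic, priority)."""
    text = ticket.lower()
    if "invoice" in text or "charge" in text:
        return "billing", "medium"
    if "down" in text or "error 500" in text:
        return "outage", "high"
    return "general", "low"

def triage(ticket: str) -> str:
    topic, priority = classify(ticket)
    return f"[priority={priority}] " + WORKFLOWS[topic](ticket)

print(triage("Our dashboard has been down since 9am with error 500"))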
In scientific research, generative AI agents speed up three key components: hypothesis creation, literature assessment, and experimental design. These agents can autonomously:
• Browse and summarize academic research papers related to the assigned subject,
• Generate structured experimental plans,
• Propose new investigations that build on previous results.
OpenAI's ChatGPT, equipped with code-interpreter and plugin tools, can act as a simulated laboratory environment, supporting Python-based statistical analysis and generating professional reports in LaTeX.
Paired with domain-specific LLMs such as BioGPT or SciBERT, generative agents can:
• Extract structured medical knowledge from unstructured biomedical texts,
• Conduct meta-analyses across different research studies within a single framework,
• Help researchers locate unexplored areas by surfacing research gaps.
4.3 Healthcare
Healthcare organizations use generative AI agents extensively for patient-care decisions, automated documentation, and patient communication.
• Virtual health assistants let patients book appointments and get answers to their queries while gathering information ahead of consultations.
• Diagnostic-support agents examine symptoms, reported directly by patients or retrieved from Electronic Health Records, to support physicians' diagnostic assessments.
Google's Med-PaLM, Nuance's DAX Copilot, and Babylon Health's AI system demonstrate agents performing these functions.
These agents do not fully substitute for clinical personnel, but they significantly reduce cognitive workload and let medical staff focus on core diagnostic work.
Generative AI agents have become effective creative partners in the creative economy. Beyond content generation, they assist industries including writing, music, game design, and digital art with iterative content refinement, thematic development, and stylistic adjustment.
For example:
• Writers use Sudowrite, NovelAI, and other generative agents to develop plots and maintain stylistic consistency.
• Music tools such as AIVA and Amper Music supply musicians with generated music that fits specified thematic requirements.
• Game developers integrate generative agents into NPC behavior scripts, dialogue systems, and environment-design tools.
OpenAI's GPT-4, with vision and audio capabilities, represents the next generation of creative agents be-
cause it understands and produces both visual and auditory content from prompts or context.
In education, generative AI agents show remarkable potential to supply personalized learning experiences that adapt to individual student needs at scale.
Examples include:
• Intelligent tutoring systems use student performance to guide specialized educational support.
• Agents generate customized curriculum content.
• Agents construct quizzes, automate assessment, and generate reasoned feedback on student work.
Khan Academy demonstrates this capability through Khanmigo, its GPT-powered agent, and OpenAI likewise offers GPT-based educational companions. Such agents can:
• Present complicated information at a level suited to the student's age and current understanding,
• Recommend learning paths dynamically,
• Combine Socratic questioning with tools for deep concept exploration.
Agents support language acquisition, STEM education, and professional development, broadening access to high-quality mentorship.
• Inference latency and computational cost increase with larger model sizes.
• Maintaining long-term memory and persistent task state between sessions remains an active research issue.
• Managing multiple sub-agents and tools becomes considerably harder as agent behavior grows more complex and must scale.
Research teams are adopting vector databases, dynamic context windows, and retrieval-augmented generation, but complete, scalable agent solutions require further development.
5.2 Ethical & Social Challenges
5.2.1 Bias and Fairness
Generative agents inherit biases from the datasets on which they were trained and can amplify those biases in operation. These biases can manifest in:
• Discriminatory or culturally insensitive outputs.
• Prejudiced decisions in candidate screening and lending.
• Reinforcement of gendered, racial, and political stereotypes.
Because such agents execute autonomously, bias mitigation becomes a basic pre-deployment requirement for societal safety. Reducing bias in these systems requires adversarial debiasing procedures, carefully curated training samples, and human oversight.
5.2.2 Agent Misuse
The combination of independent action and creative capability makes agents exploitable by unscrupulous users. Examples include:
• Automated production of spam or phishing material.
• Coordinated disinformation campaigns run through bots to spread false information.
• Deepfake creation or impersonation in social and political contexts.
Because generative agents move seamlessly between systems, link to application programming interfaces, and automate complicated workflows, they can execute dangerous operations efficiently with minimal supervision, creating digital-trust and cybersecurity risks.
5.2.3 Explainability and Transparency
Current generative agents struggle to show users their operational logic. They function as opaque, black-box systems, making it difficult to:
• Understand the exact reason behind each selected action,
• Predict their behavior in evolving situations,
• Trace the origin of errors or hallucinations.
This lack of transparency makes generative agents problematic in regulated sectors such as finance, the military, and healthcare. Current efforts to improve explainability, including attention visualization, decision tracing, and chain-of-thought prompting, show promise but remain primitive.
5.3 Regulatory Issues and Data Privacy
Deploying generative agents introduces regulatory-compliance challenges and privacy concerns.
• Agents trained on proprietary organizational data may infringe intellectual-property restrictions.
• Continuous customer use of generative agents raises concerns about consent to data usage and about how that data is handled and shared.
• Autonomous agents operating in jurisdictions with strong AI governance, such as the EU's AI Act and the GDPR, must adhere to precise transparency, safety, and accountability mandates.
Agents that integrate third-party tools, APIs, or browsers must satisfy numerous compliance requirements in real time because of their complexity. Standardized regulatory frameworks would simplify global deployment, but such frameworks are not yet established.
ethically sound generative AI systems. These agents must scale their capabilities while achieving deeper cognitive understanding through interpretability and sustainable systems development.
6.1 Memory Persistence and Lifelong Learning
A main constraint in current agents is their inability to retain memory over time and accumulate knowledge through lifelong learning. Today's agents mostly rely on short-term memory windows, losing track of mission goals and past interactions and thus failing to learn from historical knowledge. Vector databases and retrieval-augmented generation provide partial solutions but fall short of adaptive episodic memory that improves over time.
Further research should develop hybrid memory systems that unite neural embeddings with symbolic memory graphs, helping agents learn persistently across diverse tasks. Adding self-reflective features makes agents more adaptable by letting them evaluate former choices and adjust their behavior.
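The sketch below illustrates, under heavy simplification, what such a hybrid memory might look like: a free-text episodic store standing in for neural embeddings alongside a small symbolic relation graph. The class, its methods, and the keyword-matching recall are speculative assumptions about one possible design, not an established technique.

# Speculative sketch of a hybrid episodic + symbolic memory.
from collections import defaultdict

class HybridMemory:
    def __init__(self) -> None:
        self.episodes: list[str] = []        # episodic records (would be embedded/indexed)
        self.graph = defaultdict(set)        # subject -> {(relation, object)} facts

    def remember_episode(self, text: str) -> None:
        self.episodes.append(text)

    def remember_fact(self, subject: str, relation: str, obj: str) -> None:
        self.graph[subject].add((relation, obj))

    def recall(self, query: str) -> dict:
        words = set(query.lower().split())
        # Keyword overlap stands in for embedding similarity search.
        episodes = [e for e in self.episodes if words & set(e.lower().split())]
        facts = {s: sorted(r) for s, r in self.graph.items() if s.lower() in words}
        return {"episodes": episodes, "facts": facts}

memory = HybridMemory()
memory.remember_episode("The user rejected the May draft report for being too long.")
memory.remember_fact("user", "prefers", "short reports")
print(memory.recall("What does the user prefer in reports?"))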
6.2 More Autonomous Reasoning and Meta-Cognition
Modern generative AI agents inherit the reasoning power of pre-trained large language models but lack self-directed, autonomous reasoning. Multi-step reasoning today depends mainly on hardcoded approaches, prompt engineering or chain-of-thought patterns, that do not transfer reliably across contexts.
Meta-cognition in agents has become a subject of new research: agents with this ability monitor their own reasoning and decide when to search for information, modify objectives, or question assumptions. Agent development is also producing recursive self-improvement frameworks that draw on human cognitive principles and control theory.
Key areas of research include:
• Model-of-self frameworks that let agents monitor their performance against objectives and identify their limitations.
• Uncertainty modeling in decision-making (Bayesian reasoning, Monte Carlo dropout, etc.); a simple sampling-based sketch follows this list.
• Multi-agent coordination in which agents collaborate as a group, similar to swarm intelligence.
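As a down-to-earth companion to the uncertainty-modeling item above, the sketch below estimates confidence by sampling the same question several times and measuring agreement, a crude stand-in for the Bayesian and Monte Carlo dropout approaches mentioned in the list. The sample_llm stub and the agreement threshold are assumptions for illustration.

# Illustrative sampling-based confidence estimate for an agent's answer.
from collections import Counter
import random

def sample_llm(question: str) -> str:
    """Stochastic stand-in for temperature-sampled model answers."""
    return random.choice(["Paris", "Paris", "Paris", "Lyon"])

def answer_with_confidence(question: str, n: int = 20) -> tuple[str, float]:
    votes = Counter(sample_llm(question) for _ in range(n))
    answer, count = votes.most_common(1)[0]
    return answer, count / n

answer, confidence = answer_with_confidence("What is the capital of France?")
if confidence < 0.7:                      # low agreement -> the agent should hedge or ask
    print(f"Uncertain ({confidence:.0%} agreement); deferring to a human or a tool.")
else:
    print(f"{answer} (confidence {confidence:.0%})")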
• Agents adapt based on user activities, goals, and feedback channels.
• Confidence displays and reasoned explanations let agents communicate how sure they are and how they reached a conclusion.
• Shared autonomy lets humans and agents work on tasks jointly, handing control back and forth seamlessly.
Longitudinal research is needed to evaluate agent behavior over time, analyzing learning dynamics, user satisfaction, and ethical conformity. Universal benchmarking standards must be established to ensure fair assessment and to support validation of safety and regulatory compliance.
7. Conclusion
This article has examined Generative AI Agents through a detailed review of their architectures, practical applications, and key obstacles. These agents represent significant progress over traditional AI systems because they combine planning, memory, tool usage, and autonomy into unified systems that perform complex operations independently. Key insights include:
• Generative agents are built from four essential components: an LLM-based reasoning core, memory modules, planning capabilities, and tool-integration interfaces.
• These agents serve diverse needs across fields including business process management, scientific research, and individualized education.
• Essential technical and ethical issues remain to be addressed, including hallucination, limited reasoning, bias, explainability, and regulatory frameworks.
Persistent memory, meta-cognitive reasoning, and human-agent collaboration will be fundamental building blocks for more capable artificial systems. Rigorous evaluation mechanisms that simulate real operational conditions, together with safety protocols, are a pressing necessity.
Advanced responsibly, generative AI agents can extend human capacity, automate reasoning for creative work, and help solve challenging global problems. Their implementation requires ethical design safeguards, full transparency, and safety protocols. In this transformative period, human intelligence will work alongside generative agents, reshaping the frontiers of artificial intelligence.