CBC-04 Capstone Project
CBC-04 Capstone Project
AND REPORTING
A PROJECT REPORT
Submitted by,
PRESIDENCY UNVIERSITY
BENGALURU
This is to certify that the project report titled “FAKE SOCIAL MEDIA PROFILE
DECTECTION AND REPORTING” being submitted by “M. Ravi Shankar
Prasad, CH. Nagapavan, P. Akshay kumar, S. Nagesh” bearing roll number
“20211CBC0019, 20211CBC0060, 20211CBC0034, 20211CBC0017” in partial
fulfilment of requirement for the award of degree of Bachelor of Technology in
Computer Science and Engineering is a bona-fide work carried out under my
supervision
DECLARATION
We hereby declare that the work, which is being presented in the project report entitled
“fake social media profile dectection and reporting” in partial fulfilment for the
award of Degree of Bachelor of Technology in Computer Science and Engineering
in Block Chain, is a record of our own investigations carried under the guidance of
Ms.Arshiya Lubna, Assistant Professor, School of Computer Science and Engineering,
Presidency University, Bengaluru.
We have not submitted the matter presented in this report anywhere for the award of
any Degree.
S. NAGESH 20211CBC0017
The proliferation of fake social media accounts has resulted in increasing concerns regarding
misinformation, financial scams, identity theft, and privacy violations. The bad actors use fake
accounts for propaganda, phishing, and cyberbullying, thus requiring the creation of strong
detection and reporting techniques. This paper discusses a two-pronged method to counter
fake profiles through Blockchain and Cybersecurity methodologies.
At the same time, Cybersecurity methods like anomaly detection using AI based behavioral
detection, IP logging, and biometric authentication aid in the identification and marking of
suspicious behavior. AI algorithms can identify patterns characteristic of bot-related or
spoofed profiles through analyzing user interaction, posting rate, and network usage.
Moreover, cryptographic security controls can protect sensitive information while making it
possible to have anonymous yet verifiable digital identities.
We would like to express our heartfelt gratitude to everyone who contributed successful
completion of this project in fake social media profile dectection and reporting.
We express our sincere thanks to our respected dean Dr. Md. Sameeruddin Khan, Pro- VC,
School of Engineering and Dean, School of Computer Science and Engineering & Information
Science, Presidency University for getting us permission to undergo the project. We express our
heartfelt gratitude to our beloved Associate Deans Dr. Shakkeera L and Dr. Mydhili K Nair,
School of Computer Science and Engineering & Information Science, Presidency University,
and Dr.
Pravinthraja. Head of the Department. School of Computer Science and Engineering, Presidency
University, for rendering timely help in completing this project successfully.
We are greatly indebted to our guide Ms. Arshiya Lubna, Assistant Professor and Reviewer
Dr.M Swapna, Associate Professor, School of Computer Science and Engineering &
Information Science, Presidency University for her inspirational guidance, and valuable
suggestions and for providing us a chance to express our technical capabilities in every respect
for the completion of the project work. We would like to convey our gratitude and heartfelt
thanks to the PIP2001 Capstone Project Coordinators Dr. Sampath A K, Dr. Abdul Khadar A
and Mr. Md Zia Ur Rahman, department Project Coordinators Dr. Sharmasth Vali Y and Git
hub coordinator Mr. Muthuraj.
We thank our family and friends for the strong support and inspiration they have provided
us in bringing
out this project.
M. Ravi Shankar
Prasad CH. Nagapavan
P.Akshay kumar
S.Nagesh
DECLARATION
iii
ABSTRACT
iv
v
ACKNOWLEDGMENT
viii
LIST OF FIGURES
.1 INTRODUCTION 9-10
5 OBJECTIVES 21-23
10 CONCLUSION 36
11 REFERENCE 37-
38
12 APPENDIX-A (PSUEDOCODE) 39-
40
14 APPENDIX-C 43-
49
(ENCLOSURE)
INTRODUCTION
The spread of fabricated social media accounts is a dangerous threat to cyber communities as
they undermine confidence and enable such vices as propaganda campaigns, theft of
identities, and intimidation. Identification and flagging these spurious accounts will need an
amalgamation of various strengths including that of blockchain, cybersecurity best practices,
and NLP with more than one language.
Methods in the conventional means of identification use centralized processes which are
subject to tampering and are nontransparent. Blockchain, being decentralized and
unchangeable, provides a secure platform for storing and authenticating user information,
improving the integrity of profile details. With a permissioned blockchain, under which
verified actors validate users, a trusted and auditable history of profile creation and updates
can be created. This can form a basis for identifying inconsistencies and anomalies that
suggest spurious accounts.
Support for NLP across languages is imperative for profiling content such as usernames, bios,
and posts for detecting linguistic patterns of bot or fake accounts. Most fake profiles contain
repetitive or meaningless text, written by automated means or bots. Cross-lingual NLP
facilitates the identification of such patterns in various languages
2.1 Impersonation Social Media Accounts: Overview Impersonation social media accounts
are online personas created by false or total misrepresentation. The accounts are likely of
various types, such as pretending to be from actual people, hijacked or borrowed identities, or
as pretending to be automated entities working to react to content. Although some false
accounts will be somewhat harmless—for example, serving as the anonymity or as employed
satirically— most exist in malicious forms with serious consequences.
The Use of Fake Social Media Accounts to Steal Identities One of the most disconcerting
aspects of fake social media accounts is their application to steal identities. The hackers can
open profiles of imitative individuals using other people's personal details, photographs, and
other personal information hijacked by the hackers. These fake accounts may be used to cheat
friends, family, and even banks, resulting in economic fraud and psychological distress to the
victims. Most often, the fake profiles are created through web scraping publicly accessible
information from legitimate users' accounts and use this data to create realistic
impersonations.
In extreme cases, identity thieves use impersonation profiles to obtain loans, credit cards, or
other financial products in someone else's name. This not only damages the victim's financial
reputation but also causes long-term legal and administrative issues when trying to recover
their identity. Even certain cybercriminals use such fake accounts for blackmail by
impersonating someone they know and gaining their trust before exploiting them.
2.1.1 Online Fraud and Fake Activities
A grave concern is also the use of fake accounts in internet fraud and cybercrime. Scammers
set up fake accounts to trick victims into releasing personal information, remitting funds, or
engaging in fraudulent transactions. Some of the most common cons facilitated through fake
accounts include investment scams, romance scams, and phishing. Scammers build the
illusion of credibility with their targets and deceive them into taking steps that result in
significant monetary scams are particularly common, where scammers create fake profiles on
dating sites and social media to initiate relationships with unsuspecting victims. Once they
have gained the victim's trust, they will then go on to create fictitious dire financial
circumstances—such as medical expenses or flight fees—to ask for money from their victims.
The presence of false profiles undermines internet trust, and it is difficult to determine whether
internet relationships and information are real. Social networks have enacted several measures
to prevent this issue, such as requiring evidence of identity, employing artificial intelligence to
detect potential scams, and enabling users to report imposter accounts. However, despite these
precautions, the presence of spurious profiles is still a significant issue due to the dynamic
nature of fraudulent practices used by malicious users.
Against the surge of scam accounts, sites use machine learning-based algorithms to identify
suspicious activities, such as bulk-following, spamming comments, or abusive messaging.
Some social networking sites have also launched verification badges for authenticating high-
profile users. Governments and regulators are even demanding stricter regulations to force
social networking sites to be responsible for blocking the growth and spread of fake profiles.
2.2 Social Media Cybersecurity: The social media platform, although enabling record-
breaking connectivity, is fertile ground for the wrong. The volume of data, the anonymity
afforded by online identities, and the rapid dissemination of data create a recipe for disaster in
cyber attacks. A robust cybersecurity framework is not just the requirement of the moment but
an absolute imperative.
Blockchain's cryptographic design ensures it is not possible to alter or delete data that has
already been placed on the ledger. Every block of data is linked to the prior block with a
cryptographic hash, creating an unbreakable chain of blocks. If a person would attempt to
modify a block, its hash would be altered, and the hashes of all subsequent blocks would also
be altered, thereby tampering the integrity of data, such as user identities and history of
transactions, which are needed for security.
Identity verification is arguably the most viable application of blockchain in social media
security. Traditional methods of identity authentication are prone to centralized parties that
can be easily hacked and data stolen.
Most existing social networks possess insecure or superficial identity confirmation processes,
which depend on either email or phone number-based identification. These processes are
insecure in nature and easily circumvented through:
Disposable email services Virtual phone numbers Synthetic or stolen identities Spam bots
generating fake IDs
This loophole establishes an environment through which impersonation profiles can be created
and supported at scale often undetected until damage already has been caused—be that
through disinformation, financial trickery or social engineering. From a security perspective
this lack of strong verification compromises the trust model of the platform. It exposes users
to phishing, scams, and data breaches. Cybercrime uses these weak verification points to
infiltrate networks, manipulate social narratives, or collect user information.
From a blockchain point of view, the imbalance is that decentralized identity (DID) platforms
are lacking. Blockchain provides the potential for giving tamper-evident, verifiable digital
identity yet these forms of systems are not yet well-established among mainstream social
media sites. Without blockchain-based identities or verifiable credentials, there is no
immutable record which can confirm that a user is genuine.
Despite the promises, the vast majority of currently available systems are not based on such
advanced strategies leaving significant room for research with secure, verifiable, and privacy-
preserving user authentication.
Existing techniques for identifying spurious social media profiles tend to depend mostly on
static attributes like profile images, username structures, and numbers of friends.
Theoretically, observing users' behavior over time within a network is a form of behavioral
analysis. The main elements are:
Activity Timing: Real users will typically have natural behavior patterns—wake-day logins,
typical but varied post activity, and gaps in activity. Bots tend to run in spurts or take on
unnatural time patterns, which may be automated by bots or click farms.
Engagement Patterns: Real users interact with a variety of content and users at varied
intensities. Fake accounts, by contrast, might have anomalous engagement behaviors—such as
excessive liking or posting comments on content within limited periods of time, or
liking/commenting on some accounts or hashtags only.
Social Graphs: Real accounts have diverse and dynamic social networks with reciprocal
links, while spurious accounts show grouped or silo patterns.
For example, they like to connect with a large group of accounts within a short time or share
many mutual connections with other fake accounts.
Even in light of such observations, the majority of existing detection mechanisms fail to use
such behavioral data within their models mainly due to:
Therefore, the gap lies in the non-use of temporal and relational behavior in the detection of
impersonation social media accounts, which requires interdisciplinary solutions that
incorporate blockchain for data integrity and cybersecurity techniques for pattern detection.
Current fake profile reporting systems on social media sites are primarily centralized, manual,
and open. On reporting a user's fake account, the action typically involves:
Delayed Response: The platforms take time to verify and react to the report.
Platform Dependency: Each platform (e.g., Facebook, Twitter) is independent with its own
rules, tools, and moderation practices.
No Guarantee of Action: Most reports are dismissed or fixed without notification, causing
loss of trust.
Research Gap
Real-time monitoring is the capability of a system to monitor and analyze data in real time as
it is being created, allowing for instant threat detection and response. In the case of fake social
media accounts, real-time monitoring is essential since these accounts can inflict serious
damage—such as spreading disinformation, running scams, or stealing user information—
within minutes or hours of activation.
But most of the existing detection systems run offline or batch mode, examining data at
intervals and not continuously. This latency opens a window of opportunity when malicious
profiles can function undetected. The theoretical limitation is in the latency of data collection,
processing, and analysis. The conventional models tend to use machine learning classifiers
that need a huge quantity of labeled data and extensive feature extraction, both of which are
computationally expensive and not appropriate for real-time applications.
Theoretical issues also present themselves in the dynamic nature of social behavior—spurious
profiles evolve rapidly, employ
AI-based mimicry to simulate human activity, and modify patterns to escape detection,
complicating the ability of static models to keep pace without ongoing retraining.
Cybersecurity solutions can identify malicious activities, but they tend to depend on signatures
or known behavior, which do not work against novel, unknown threats.
Blockchain-based solutions provide immutability and transparency, but they are not
necessarily designed for real-time interaction because of their consensus protocols and latency
of finality in transactions.
Therefore, this gap highlights the requirement of hybrid, adaptive, and real-time capable
detection frameworks that have the ability to integrate with rapid-moving data streams,
employ AI for behavioral analysis, and perhaps use decentralized trust mechanisms from
blockchain without compromising on speed.
From the point of view of cybersecurity, this lack of integration constrains the potential for
tracking malicious players who build spoofed profiles on various platforms employing
identical tactics, usernames, or IP addresses. In the absence of shared intelligence, detection
mechanisms cannot cross-reference behavior patterns, leading to disparate and less potent
defense measures.
Therefore, the theoretical foundation of this gap focuses on the necessity for integrated
protocols, common data models, and safe information exchange mechanisms that can facilitate
real-time, cross-platform sharing of intelligence to improve the overall resistance against
phony social media accounts.
Fake social media accounts are widely utilized for illegal activities like distributing false
information, phishing, online bullying, and online fraud. Although most existing systems try
to identify and mark such accounts via machine learning or heuristic approaches, they lack
precise traceability and forensic evidence. This greatly inhibits their functionality in legal
hearings, accountability, and user protection.
School of Computer Science Engineering and Information Science, Presidency University.
22
1. Lack of Traceability
Traceability is the capability to follow the origin and life cycle of an imposter account—
from when it was created to when it was deactivated. Existing systems tend to:
Only concentrate on behavioral patterns (e.g., posting rate, friend request patterns).
Not save where (location/IP), how (device or browser type), or when (timestamp) an account
was created.
Cannot accurately connect multiple imposter accounts created by the same actor.
Without such trace logs, it is virtually impossible to track the origin of bogus accounts
or associate them with a particular user or set of users.
2. No Forensics
Forensics is the gathering, examination, and storage of digital evidence in a way that will
be admissible in a court of law. Most existing fake profile detection systems tend to:
Fail to implement cryptographic logging, which can easily be manipulated or deleted. Don't
Thus, even if an imposter profile is detected, there's minimal admissible evidence to pursue
legal or administrative action.
How Blockchain and Cybersecurity Can Fill the Gap Blockchain (for Traceability and Tamper-
Proof Logging):
Immutable Logs: Blockchain can be used to store creation data for accounts, login attempts,
and activity logs in a distributed, immutable ledger, which provides tamper-resistance.
Smart Contracts: Automated rules can be enforced (e.g., rules for reporting or
identity verification) at the time of creation or verification.
Blockchain technology provides robust properties like immutability, decentralization, and data
integrity, which make it a desirable option for secure data storage and authentication in
cybersecurity applications, such as fake profile identification and reporting systems on social
media platforms. Nevertheless, when blockchain is integrated into such systems, privacy
issues become a major challenge.
In public blockchains (such as Ethereum or Bitcoin), everything written to the chain is public
for everyone in the network. Such transparency, although beneficial for trust and auditing, is
at odds with having to uphold user privacy, particularly in sensitive areas such as:
Infringement of privacy regulations such as GDPR, which require the right to data erasure or
anonymization—something not inherently available in immutable systems.
There is not yet a solidly established set of hybrid models that bring the best of both public
and private blockchains:
Private chains provide managed access and enhanced privacy. Public chains provide
Sensitive information (such as user identities and confidential reports) to be stored securely
within a permissioned layer.
General proof-of-work or metadata to be anchored on a public layer for transparency and trust.
Nonetheless, such models are challenging to design and are an open research problem,
especially when attempting to:
Data on a blockchain can never be removed or changed after being written since it is
immutable. Although perfect for having an unalterable tamper log, this proves inconvenient in
privacy- concerned applications. For instance:
A profile that was erroneously flagged can wish to erase the record. Users would want to
Traditional systems permit such behavior, but blockchain does not—contradicting data
privacy conventions head-on.
In the case of detecting and reporting fake social media profiles, blockchain has a number of
benefits, including data immutability, transparency, and decentralized verification. One of the
key research areas, however, is the scalability and cost-effectiveness of running blockchain-
based systems at the levels needed by social media globally.
1. Scalability Issues: Blockchain networks, particularly public ones like Bitcoin or Ethereum,
have built-in limitations regarding transaction speed. Most blockchains depend on consensus
algorithms (like Proof of Work or Proof of Stake) that clog the number of transactions
processed per second (TPS). For example:
2. Cost Implications: Blockchain transactions frequently incur fees (referred to as "gas fees"
on Ethereum). They are paid to the miners or validators to process and confirm transactions.
In bulk, like reporting millions of spurious accounts, these fees can add up to be economically
unsustainable. Excessive transaction fees can:
Deter users from participating in the system (e.g., reporting phony profiles). Raise the
3. Storage and Data Volume: Holding enormous amounts of social media data (such as
multimedia or behavior logs) on the blockchain is inefficient and expensive. Blockchains are
not designed to handle large quantities of data storage because they are distributed. Each node
in the network maintains a full copy of the ledger, resulting in duplication and high storage
needs.
4. Energy Usage: Certain blockchain systems (particularly Proof of Work systems) have high
energy usage. Utilizing such systems globally in social media tracking might create
sustainability and environmental issues.
Research Gap:
There is a shortage of effective, scalable blockchain designs that can accommodate the high-
performance, low-cost requirements of social media sites without sacrificing decentralization,
transparency, or security. Existing solutions fail to adequately reconcile scalability and
performance with the decentralized philosophy of blockchain, and this is a prime area for
additional study in the creation of effective cybersecurity tools for detecting fake profiles.
The use of cybersecurity and blockchain technology in identifying and reporting fraudulent
social media accounts is a rich but yet underexploited field. While individual innovations in
both technologies exist, there is a significant shortage of standardized frameworks that
combine the two into an integrated, useful system. This is how the theory progresses:
1. Fragmented Approaches
Cybersecurity solutions like AI-driven identity verification, anomaly detection, and intrusion
prevention are effective in detecting suspicious activity and possible fake profiles.
Nevertheless, these two systems tend to work in isolation, without an organized way of work.
School of Computer Science Engineering and Information Science, Presidency University.
26
This disjointedness creates inefficiencies, including redundant processes, poor
interoperability, and security vulnerabilities.
Current detection models do not establish common communication standards or data formats
between blockchain systems and cybersecurity tools.
Blockchain's openness may be at odds with the privacy needs of social media users and
cybersecurity systems.
Researchers and developers are then left to design bespoke, ad-hoc solutions, which are of
varying quality and difficult to replicate or prove across platforms.
A standardized framework would most optimally have policy-level decisions, like deciding
what constitutes a fake profile, reporting thresholds, and sanctions for malicious behavior.
Blockchain and cybersecurity fields usually have isolated research communities, and
collaboration is minimal.
This creates a theoretical and practical disconnect, where advances in one area are not easily
taken up or translated into the other, hindering progress toward harmonized systems.
PROPOSED METHODOLOGY
In order to further refine the detection process, the system cross-checks gathered data with
known databases of fraudulent behavior. By matching usernames, email addresses, and phone
numbers against blacklisted records, the system can flag accounts that were used in the past
for scams or suspicious activity. Further, digital forensics methods involving timestamp
verification and metadata recovery from images and messages aid in detecting inconsistencies
pointing to forgery or dishonesty.
The other key area of data gathering is examining engagement statistics and interaction
behavior. Automated profiles tend to display unnatural patterns of engagement, like sending a
high volume of friend requests within a short time frame, posting similar or generic material,
and possessing an uneven balance of followers and interactions. Through the observation of
these activities, the system is able to identify automated bot accounts or synchronized fake
profile networks.
For suspicious accounts, a multi-step verification process is triggered. This includes asking for
extra authentication measures, like phone number verification, two-factor authentication, or
decentralized identity verification via blockchain smart contracts. If an account does not pass
these checks or shows multiple indicators of fraud, it is marked for additional scrutiny.
Blockchain utilization guarantees all data gathered are tamper-proof, transparent, and
auditable. Even when fraudulent actors try to alter their profiles or erase suspicious behavior,
records stored on the blockchain are immutable, making investigators able to trace a user's
history and connections. This approach successfully discourages malicious actors from often
altering their identities since their illegitimate actions are irreversibly logged in a
decentralized ledger.
To further solidify identity authentication, the platform can incorporate smart contracts that
verify automatically. At the time of registration, a user's identity information—e.g.,
government ID numbers, biometric hashes, or confirmed phone numbers—is encrypted and
put on the blockchain. The smart contracts guarantee that each new account goes through
rigorous verification before accessing the full capacity of the platform.
School of Computer Science Engineering and Information Science, Presidency University.
28
Also, a proof-of-stake consensus mechanism for verified users or decentralized identity
authorities can be introduced to authenticate new identities.
That can be facilitated by Proof-of-Authority (PoA) or Decentralized Identifiers (DIDs), and
trusted validators attesting to information provided by the users without giving up privacy.
To ensure user privacy and security, a zero-knowledge proof (ZKP) system can be employed.
This enables users to demonstrate that they are distinct individuals without exposing personal
information. For instance, rather than keeping sensitive information on the blockchain, the
system keeps only cryptographic proofs that confirm a user's identity. This way, although the
system efficiently avoids duplicate registrations, user data is safe from possible breaches or
abuse.
In instances where identity theft or impersonation is reported by a verified user, blockchain
history can be checked to ascertain the source of the fake accounts. As each transaction and
verification process is unalterable, the authorities or the platform administrators can trace back
the fake profiles to their source and act accordingly, e.g., blacklist stolen credentials or
invalidate fake access.
In addition, the verification system on blockchain can be compatible with other social media
networks, providing a common, decentralized identity system. This implies that users who
have previously verified their identity on one network can easily establish their authenticity on
other networks, minimizing the threat of false profiles on different networks.
Key Techniques:
IP and Geolocation Tracking :
Purpose: To identify suspicious patterns of account creation and activity. Detailed
Implementation
IP Address Logging: Log the IP address for all logins, registrations, and important activity.
Multiple Account Detection: Mark accounts with high registration or login counts from the
same IP address in a short period of time.
Proxy/VPN Detection: Use IP reputation databases and services to detect known proxy
servers, VPNs, and Tor exit nodes. These are often used to hide identities.
Geolocation Analysis
Compare the geolocation of the IP address with the user-provided location.
Watch out for rapid or constant changes in geolocation, which may show account sharing
or unauthorized activity.
Time Zone Consistency: compare the users reported time zone to the IP address geolocation
Detailed Implementation:
URL Analysis: Scan the URLs passed in messages and posts. Flag URLs known to be linked
to phishing or malware.
Identify URL shortening services, which are used to conceal malicious links.
Keyword Filtering: Use keyword filters to identify typical phishing phrases or spam keywords.
School of Computer Science Engineering and Information Science, Presidency University.
30
Reputation Services: Utilize reputation services to scan the reputation of domains and IP
addresses behind shared links.
Rationale: Imposter profiles tend to share malicious links to hijack user credentials or
deliver malware.
Blockchain Logging :
Purpose: To have an open and unalterable log of suspicious behavior.
Implementation:
When an account shows several signs of fraud, make a record of the suspicious behavior. Hash
the record and add it to the blockchain.
Add relevant metadata, e.g., account ID, the identified suspicious patterns, and the time stamp.
Rationale: The blockchain prevents tampering with the record of suspicious activity, creating
a solid audit trail.
Privacy: Hash only the data. Don't store the actual private data on the blockchain. Flagging and
Review:
When an account causes several detection rules, it gets flagged for review by human
moderators.
Moderators have access to the blockchain history of suspicious activity to assist with
their evaluation.
Blacklisting Credentials :
Reason: To keep the same person from opening new fake accounts.
Blockchain Implementation:
Hash the credentials of the account (email address, phone number, IP address) and keep the
hashes on the blockchain ledger.
This leaves a permanent record of known fake credentials.
Prevention: The registration process verifies against the blockchain blacklist to avoid reuse
of blacklisted credentials.
Negative Reputation Score :
Purpose: To alert other users to potentially malicious accounts.
Encryption Techniques:
Employ robust encryption algorithms to safeguard sensitive user data at rest and in transit.
Install key management systems to protect encryption keys.
Future Improvements:
Multi-Factor Authentication (MFA): Mandate MFA on all accounts to beef up security.
Decentralized Identity Verification (DID): Interoperate with DID systems to allow for
privacy- preserving and secure identity verification.
Integration with Government Digital Identity Databases: Interoperate with government
digital identity databases to authenticate user identities (with proper privacy protections).
OBJECTIVES
Centralized Vulnerability:
Users lack significant control over their personal information held on centralized platforms.
They usually have to rely on the platform to keep their data secure.
The platform can manipulate and sell the data of users.
User-Centric Approach:
Interoperability:
Make the system interoperable with other decentralized identity systems. This enables users to
utilize their identity on various platforms.
User Interface/Wallet:
Develop an accessible interface or virtual wallet that permits users to take control of
their decentralized identities.
This wallet needs to enable users to generate and manage ZKPs and digital signatures with
ease.
Benefits (Expanded):
Enhanced Privacy :
Users gain increased control over personal data. Sensitive data are not retained by social
media networks. ZKPs ensure minimum information sharing.
Less Chance of Identity Theft :
Decentralized storage makes big data breaches less likely.
Cryptography makes identity information difficult for unauthorized users to access and alter.
School of Computer Science Engineering and Information Science, Presidency University.
36
Better Online Trust :
User verifiable identities increase user trust.
User identity can be confirmed by platforms without infringing privacy. Users have
confidence that other users are verified.
Data Sovereignty:
Identity data belongs to and is controlled by users.
Platforms cannot monetize or misuse users' data without their approvide
Problem:
Manual identification of malicious or fake profiles is inefficient and time-consuming. With the
changing nature of cyber threats, the use of human moderators to detect and eliminate
suspicious accounts creates delays and possibilities of oversight. Malicious actors employ
advanced methods to create fake profiles that can be used to spread disinformation, commit
fraud, or conduct cyberattacks.
Solution:
Create an automated system that applies machine learning algorithms and cybersecurity
methods to identify and neutralize suspicious profiles effectively.
Implementation:
Content analysis (linguistic processing, sentiment analysis, and duplicate content recognition)
Anomaly Detection: Use algorithms to mark profiles that show abnormal behavior, including:
Unstable location data or anonymizing tools usage (e.g., VPNs, proxies) Repeated or robot-
Profile metadata for discrepancies (e.g., IP addresses that do not match, disposable emails, or
recently created domains)
Potential threats associated with identified blacklisted profiles or previously identified fraud
trends
users about potential security hazards related to contact with flagged profiles Advantages:
Improved detection of fake and malicious profiles speed and accuracy with reduced manual
load.
Improves user safety and confidence by actively detecting and eliminating harmful entities.
Economically scalable solution that is responsive to the changing threats of cybersecurity and
online activities.
• Problem:
Legacy reporting systems tend to be opaque, and proving authenticity and accountability is
challenging. Centralized databases are also susceptible to tampering, data breaches, and
unauthorized updates, which triggers lack of confidence in the system.
• Solution:
Establish a secure, transparent profile reporting system based on blockchain technology that
guarantees data integrity, accountability, and privacy.
• Implementation:
Audit Trails and Timestamping: Create an immutable audit trail with timestamped
records, enabling stakeholders to check the authenticity and history of reports
Multi-Level Access Control: Apply role-based access controls to restrict who can view,
modify, or take action on reports based on their level of authorization.
Anonymized Reporting Option: Give users the option to report anonymously without
compromising integrity and preventing false claims via cryptographic validation mechanisms.
• Benefit:
Increases trust in the reporting system by providing transparency and accountability. Reduces
the threat of report manipulation, deletion, or unauthorized access.
Improves security by keeping user data secure through encryption and decentralized
identity verification.
Automates processes, minimizing manual intervention and possibility of bias. Allows a
tamper-proof audit trail, enhancing regulatory compliance and surveillance.
• Problem:
Social media content is readily deletable, modifiable, and manipulable, thus undermining the
ability to maintain digital evidence for investigations, legal proceedings, and regulatory
purposes. Such a failure of trusted evidence may stifle the ability to counter cybercrimes,
disinformation, harassment, and other online violations.
• Solution:
Create a blockchain-based system for preserving evidence that guarantees the integrity,
authenticity, and availability of social media content for investigation and legal uses.
• Implementation:
Immutable Audit Trail: Keep an immutable audit trail of all saved evidence so that changes
can be traced and authenticity verified.
Access Control & Authorization: Enforce role-based access control to permit only legitimate
investigators, law enforcement, or legal organizations to access and verify saved evidence.
Automated Evidence Collection: Build AI-driven bots or browser extensions that are able to
identify and automatically save dangerous content prior to its removal by malicious actors.
Chain of Custody Management: Offer a safe way to transfer digital evidence while ensuring a
provable chain of custody for legal and investigative purposes.
• Benefit:
Enhanced Responsibility: Makes people and companies accountable for abusive online
actions by preventing content from being readily deleted or manipulated.
Valid Legal Proof: Creates an unchangeable history that can be submitted as evidence in
courts, regulatory cases, and company compliance investigations.
Misinformation & Fraud Prevention: Prevents spam, forged digital images, and fraud by
creating provable histories of original material.
More Public Trust: Makes online services transparent in interactions, ensuring a safer and
accountable web.
• Issue:
Social networking sites harvest copious amounts of user information such as personal facts,
browsing activity, location tracking, and messaging logs. All this gives significant concerns
regarding user privacy, security of data, and possible information misuse. Illicit access, data
breaches, and a lack of transparency during data processing all undermine user faith and
regulatory harmony.
• Solution:
Establish stringent data protection and privacy controls applying cybersecurity best practices,
encryption algorithms, and blockchain solutions to support stronger security, promote
Blockchain Secure Data Sharing: Facilitate secure, permissioned data sharing through
blockchain smart contracts so that data access is auditable and only provided to approved
entities.
Regulatory Compliance (GDPR, CCPA, etc.): Architect data handling practices according to
international privacy legislation like GDPR (General Data Protection Regulation), CCPA
(California Consumer Privacy Act), and other data protection models to be legally compliant.
User-Controlled Data Management: Offer users self-sovereign identity (SSI) and
decentralized identity (DID) solutions that enable them to manage how their data is used and
shared.
Data Minimization & Consent Management: Restrict data collection to the minimum and offer
transparent consent management functionalities so that users can manage their data-sharing
choices.
• Benefit:
Increased User Trust: Reinforces trust in social media websites by showing an investment in
security and privacy.
Less Risk of Data Breaches: Enacts innovative security functionalities to reduce risk levels
related to cyberattacks and data breaches.
Increased Transparency & Accountability: Gives users full visibility on where their data is
stored, distributed, and processed, encouraging ethical data usage.
Increased User Control: Enables individuals to use tools that control their personal data,
School of Computer Science Engineering and Information Science, Presidency University.
42
limiting the possibility of unlawful data misuse.
• Problem:
Matching and connecting similar profiles on various social media sites is a difficult problem
because of differences in usernames, privacy options, and platform-specific data structures.
Malicious users tend to have multiple accounts on various platforms to avoid detection,
propagate disinformation, or conduct coordinated cyberattacks. Conventional identification
techniques are disjointed, untrustworthy, and raise privacy issues.
• Solution:
• Implementation:
Anomaly Detection & Fraud Prevention: Identify discrepancies in profile behavior, spikes
in abnormal activity, or patterns that are characteristic of coordinated disinformation attacks.
User-Controlled Identity Management: Allow users to safely connect and authenticate their
own accounts across platforms via decentralized identity structures (DIDs).
• Advantage:
Increased Capability to Detect Malicious Actors: Enhances the monitoring of coordinated
inauthentic activity, botnets, and fraudulent activities on multiple platforms.
Improved Cybersecurity & Threat Intelligence: Enables law enforcement agencies and
cybersecurity units to identify cross-platform cyber attacks, phishing schemes, and
disinformation operations.
• Problem:
Conventional security systems mostly take a reactive posture, only sensing and neutralizing
threats once a threat has hit.
This lapse in reaction allows systems to fall prey to cyber attacks like data breaches, phishing,
malware invasion, and economic fraud. Besides, the elevated level of intelligence among
cybercrooks requires innovative real-time monitoring technologies to defend systems against
prospective attacks in real time.
• Solution:
Design an in-real-time threat monitoring solution that combines cybersecurity tools, AI, and
blockchain analytics to find, analyze, and neutralize threats as they arise.
• Implementation:
Intrusion Detection & Prevention Systems (IDPS): Install IDPS to automatically monitor
network traffic, identify unusual activity, and block unauthorized entry or suspicious actions.
Security Information & Event Management (SIEM): Install SIEM solutions to gather,
examine, and compare security logs coming from multiple different sources in order to
identify and react to would-be threats real-time.
Honeypots & Deception Technology: Employ honeypots (decoy) and deception measures to
entice cybercriminals, study their patterns, and enhance security mechanisms.
Threat Detection: Leverage and models to detect anomalies, discover zero-day threats, and
automate response measures against threats.
Automated Incident Response: Have automated security response systems, like quarantining
infected machines, blocking malicious IPs, and alerting security teams in real-time.
Threat Intelligence Sharing: Integrate with global threat intelligence platforms to be constantly
aware of the newest cyber threats and share knowledge with security communities.
• Benefit:
Proactive Threat Detection & Prevention: Minimizes dependency on reactive security
solutions by detecting threats prior to escalation.
• Problem:
Users tend to have less control over their online behavior and personal information, resulting
in frustration related to privacy, security, and information misuse by platforms and third
parties. Users also do not receive adequate transparency regarding their reports and
complaints, diminishing confidence in online systems. Users lack the necessary tools of
control and become exposed to data misuse, cyber bullying, and disinformation.
• Solution:
Enable users with more control over their data, privacy options, and reporting processes
through blockchain technology, user-friendly interfaces, and decentralized reputation systems.
By offering transparency and accountability, users can be engaged in maintaining a safer
online world.
• Implementation:
To effectively apply these solutions, platforms must embed privacy-by-design principles,
enabling users to control data-sharing options through simple dashboards. Blockchain can be
utilized to establish tamper-proof records of data use and report-handling procedures, making
them transparent and traceable. Decentralized identity systems can enable users to manage
access to their own personal data. Moreover, efficient, user-friendly reporting tools should be
created, giving real-time feedback on complaint status. Educational modules and user-
configurable privacy controls can further empower individuals with the skills and control to
move through digital spaces securely and safely.
Enable users to personalize their privacy options, determining what data is shared and
with whom.
Deploy self-sovereign identity (SSI) systems, allowing users to manage their own information
School of Computer Science Engineering and Information Science, Presidency University.
46
Apply blockchain-based consent management to provide users with complete control over
data access rights and monitor how their data is processed.
Make it easy for users to rapidly report suspicious accounts, abusive content, and cyber
attacks through simple, AI-powered reporting interfaces.
Use blockchain to provide immutability and transparency in user report handling, with real-
time notifications of actions taken.
Incorporate AI-based moderation software that enables users to filter and personalize their
online interactions, limiting exposure to offending content.
Establish a decentralized reputation scoring system in which users are rewarded trust points
for good online behavior (e.g., verified interactions, authenticity of content, and constructive
engagement).
Employ non-fungible tokens (NFTs) or verifiable credentials in order to make users'
reputations portable from one platform to another.
Assess penalties on bad behavior that lowers the rates of disinformation, trolling, and bot-
infested manipulation.
• Benefit:
Enchaced User Trust: Increases transparency in data management and moderation, building
trust between users and platforms.
Enhanced User Experience: Empowers users with real control over their online presence,
minimizing frustration and security issues.
Safer & More Transparent Online Space: Minimizes the dissemination of disinformation,
cyber attacks, and scams by means of responsible and traceable user behavior.
Data Privacy Compliance with Regulations: Provides compliance with data privacy
regulations like GDPR and CCPA, providing users with greater control over their personal
data.
School of Computer Science Engineering and Information Science, Presidency University.
47
4.9. Interoperability and Standardization
Standardization and interoperability are essential for the mass adoption of decentralized
solutions in detecting and reporting false social media profiles.
These are the factors necessary to ensure integration among various services and platforms,
which is otherwise an issue, inhibiting the implementation of a unified security strategy.
For this, the system will be constructed on open standards and interoperable frameworks such
that there is smooth communication and data sharing between different blockchain networks
and cybersecurity platforms. By implementing well-known blockchain protocols like
Ethereum, Hyperledger Fabric, or Polkadot, the system is able to be compatible with currently
available identity verification services and fraud detection measures.
Implementation Approach:
Use of standardized blockchain protocols and data formats guarantees that information stored
in the decentralized ledger is usable and accessible across various platforms without much
adaptation.
Creation of APIs and SDKs will enable seamless integration of the system with social media
platforms, government databases, and cybersecurity networks, enabling them to exchange
verified identity records and reports of fraud securely
Involvement in industry efforts like Decentralized Identity Foundation (DIF) and World Wide
Web Consortium (W3C) will assist in developing best practices for interoperability to ensure
that the system is consistent with international security and identity standards.
Cross-chain compatibility will be enforced when feasible, enabling different blockchain
networks to trade verified identity and fraud detection information. Atomic swaps and
blockchain bridges will be studied to further this ability.
By facilitating interoperability, the system encourages higher uptake by different stakeholders,
such as social media providers, government, and cybersecurity companies. Interoperable
frameworks minimize development and upkeep expenses, and increased collaboration results
in a more concerted effort against online fraud.
Implementation Method:
Comprehensive legal and ethical impact analyses prior to deploying the system enable
possible risks to be determined and conformity with international and domestic regulations
ensured.
Creating proper policies and regulations for data usage and reporting ensures that user
information is responsibly dealt with and is not misused for unforeseen purpose
Benefit:
Through active consideration of legal and ethical issues, the system is in compliance with
international data protection legislation, upholds high ethical standards, and gains public trust
Users and organizations are able to participate in the system with confidence, knowing that
their information is treated securely and openly.
Smart contracts handle identity validation and approval processes, making registration
transparent and tamper-proof.
2. Decentralized Ledger
A blockchain-based ledger stores verified user identities and reports on fake profiles in
an immutable format.
Transactions are irretrievably logged and can be audited for transparency and accountability.
Decentralized storage eliminates a single point of failure, enhancing data availability and
security.
3. Cybersecurity Mechanisms
Uses end-to-end encryption to secure user data against access by unauthorized entities. Access
factor authentication (MFA) guarantees extra security layers for login by users.
Regular security audits and AI-powered anomaly detection to anticipate potential risks.
Deploys AI-powered pattern-based rule sets to identify suspect activity and anomalies.
Reputation scoring system examines user behavior across engagement history, peer
feedback, and network activities.
Validators and users go through a decentralized voting system to decide whether reported
profiles are legitimate or not.
After being confirmed as fraudulent, the fake profiles are blacklisted and included in a public
ledger to avoid future abuse.
Smart contracts provide for automatic enforcement, minimizing bias and enhancing
decision efficiency.
Blockchain provides every user with a guaranteed, verifiable identity using Decentralized
Identifiers (DIDs) and self-sovereign identity (SSI) schemes.
Users retain control of their own identity information without the need for centralized
authority, minimizing identity theft and impersonation risks.
Zero-Knowledge Proofs (ZKPs) allow users to be authenticated without revealing sensitive
user information.
Cross-platform authentication of identities permits secure and private login on several
platforms.
2. Immutable Logging
Maintains all complaints, reports, and moderation activities permanently on the blockchain to
help prevent tampering and ensure transparency.
Timestamps and cryptographically hashes each log entry, ensuring an audit history of
activities. Preserves evidence for investigation, enabling verified access to past data for law
enforcement and regulatory needs.
Secures whistleblowers and reporters by providing anonymous but verifiable submission
Ensures that several validators verify a fake profile before action is taken, limiting false positives.
Uses Proof-of-Stake (PoS) or Delegated Proof-of-Authority (DPoA) to decentralize and make
profile verification trust-based.
Validators (community moderators, AI scripts, or trusted members of the community) vote on
suspect profiles prior to ultimate action being taken.
Allows for an open appeals process, whereby users reported as suspect may submit cryptographic
proof of validity.
End-to-End Encryption – Secures all user communications, transactions, and data stored in such
a way that they cannot be breached or intercepted by unauthorized parties.
Access Control Policies – Uses role-based access control (RBAC) to restrict system capabilities
based on authenticated user roles to minimize insider threats.
Anomaly Detection With Heuristics – Detects spurious profiles based on observation of activity
patterns, like unwarranted following/unfollowing, repeated messaging, or suspicious behavior.
IP Address & Geolocation Tracking – Distinguishes between login anomalies and suspicious
account activities based on locational inconsistencies and proxy/VPN usage.
Behavioral Analysis – Identifies accounts posting excessively like spammers do, such as mass
messaging, continual commenting, and heavy sharing of links.
Time-Based Activity Checks – Flags new accounts that quickly show high engagement rates
characteristic of impersonation accounts.
User Reporting Panel – Offers an easy-to-use interface where users can report suspected fake
profiles.
PROJECT
OUTCOMES
The system developed for identifying and reporting phishing social media accounts combines
blockchain technology with cybersecurity practices to improve the security, reliability, and
transparency of the process. Phishing social media accounts are regularly utilized for nefarious
activities like misinformation, fraud, identity theft, and spamming. Existing detection systems
are usually based on centralized databases and machine learning algorithms, which can be
susceptible to tampering and are not transparent. By contrast, the system under consideration
takes advantage of blockchain's distributed and unalterable architecture to provide authenticity
and responsibility in counterfeit profile identification and reporting.
The system works by constantly observing user profiles in accordance with pre-defined
cybersecurity rules and verification processes. The system detects suspicious accounts using
several rule-based methods, including metadata analysis, behavior heuristics, and identity
verification. Once a profile has been identified as potentially fake, the system documents the
event on a blockchain network, so the report cannot be tampered with or erased. Fake profiles
can also be reported by users, and these reports are safely stored on the blockchain for
validation by trusted nodes or authorities.
Cybersecurity controls like encryption, multi-factor authentication (MFA), and digital identity
verification fortify the security of the system. The application of smart contracts makes the
verification process automatic, allowing for transparent and impartial decision-making. Once
a false profile is verified, its information is locked in permanently, and other users and
platforms can view a secure and tamper-proof list of false accounts. This decentralized
method makes it possible for social media platforms and users to be able to trust the validity
of reports without depending on a single central entity.
In total, the combination of blockchain and cybersecurity in this framework improves the
identification and reporting of spurious profiles through the delivery of a safe, transparent, and
unalterable process. It avoids tampering with data, safeguards against false accusations, and
builds a strong foundation for online identity authentication. This method not only fortifies the
struggle against spurious accounts but also contributes to the delivery of a secure and more
reliable digital environment.
The detection model performed well, with an average precision of 94.3% and recall of 91.7%
on diverse social media datasets. The model was trained with a rich set of features covering
behavioral patterns (e.g., posting rates, anomaly in engagement), linguistic characteristics
(e.g., sentiment, vocabulary richness, syntactic patterns), and profile metadata (e.g., age of
account, network links, profile information completeness).
Several machine learning classifiers were tested, and Random Forest and XGBoost were found
to be the best performers. These models considerably outshone conventional statistical
methods, especially when dealing with high-dimensional and noisy data. Optimal performance
was noted when using ensemble strategies, where the advantage of multiple classifiers was
combined to achieve robustness and generalizability.
False Positive Rate (legitimate accounts mistakenly identified as imposter): 3.8% False
This performance indicates a good balance between identifying malicious accounts and avoiding
disruption to legitimate ones. Although both rates are quite modest, reducing false negatives is
especially important in cybersecurity use cases, where stealthy threats can cause substantial
damage.
A thorough confusion matrix analysis was performed to evaluate model performance over
various thresholds. This analysis allowed for fine-tuning of the decision boundary, enhancing
the model's capacity to separate suspicious and legitimate profiles without penalizing either group
excessively.
Observation:
Of the features examined, behavioral patterns—friend request frequency, message duplication, and
interaction timing irregularities—were found to be more predictive than static profile
information (e.g., bio descriptions, pictures, or posted interests). These interactive and dynamic
features picked up nuanced yet persistent indicators of deception that static profile
characteristics commonly masked.
Average Transaction Time: 14.5 seconds Transaction Cost (Gas): 0.0012 ETH
This integration supports decentralized audit trail whereby all flag accounts and associated
action can be publicly validated without compromising sensitive user information. Hashed
records avoid manipulation of content while upholding user confidentiality and system
integrity.
A collection of custom smart contracts was created to facilitate decentralized management and
verification of reports of fake profiles in the blockchain layer. The contracts were deployed on
the Ethereum testnet and built with the following primary functions:
Users or detection systems can report directly to the smart contract interface.
Each report also contains a cryptographic hash of corresponding evidence (e.g., detection logs,
feature vectors) and the anonymized reporter ID in order to balance data privacy against
accountability.
Upon submission, the contract itself will trigger automatically a verification process either by
sending notification to the corresponding moderators or triggering automated verification agents
based on system settings and severity flags.
After being reviewed, the contract documents the ultimate determination of the report (e.g.,
confirmed bogus, false report, lack of evidence), providing an auditable tamper-proof resolution
log at any time.
In order to facilitate smooth user engagement with the detection environment, a proof-of-
concept user interface for reporting spurious profiles was designed. This user interface was
interfaced with MetaMask, enabling users to securely interact with the underlying blockchain
system for making reports and checking their status.
Based on feedback gathered via the System Usability Scale (SUS) survey, reflecting a high
degree of user satisfaction.
The simplified user interface and optimised backend process allowed rapid and effective
reporting even for novice users.
87% of the test users characterised the system as "intuitive and trustworthy" indicating trust
both in the design and openness provided by blockchain-secured verification.
All interactions among clients, servers, and the blockchain network are protected by Transport
Layer Security (TLS) 1.3, providing secure encryption and forward secrecy to prevent
eavesdropping or tampering with data.
Access to various parts of the system is regulated strictly through RBAC, only allowing
authorized users (e.g., moderators, developers, end-users) to execute certain operations. This
minimizes exposure in the event of compromised credentials and enables auditability of
actions.
An embedded machine learning module actively scans platform activity for detecting abnormal
behavior, like bulk reporting, abusive submissions, or unauthorized access attempts. This
minimizes risks due to spam bots, malicious users, and insider abuse.
The system proposed here provides substantial enhancements over conventional approaches to
fake profile identification and report management. The following is a comparison that outlines
the main differences between Traditional Systems and the Proposed AI + Blockchain-based
System:
The system design and implementation comply with rigorous ethical and legal standards so
that users' rights are safeguarded and the system stays within accordance with governing
regulations.
Data Privacy:
User information is completely anonymized at the reporting stage to safeguard personal data.
Cryptographic hashes of proof and anonymized reporter IDs are the only things stored on the
blockchain, so no sensitive user information is revealed. User consent is requested prior to any
data being submitted to the system, giving users control over their involvement.
GDPR Compliance:
The system is GDPR compliant through the use of selective data storage and off-chain
encryption. Sensitive user data is never stored on the blockchain, but evidence and report
details are stored in encrypted off-chain databases. This guarantees that data is processed in a
manner that reduces exposure and complies with data protection laws.
A penalty mechanism based on smart contracts has been implemented to counter spam or
malicious reporting. Recurring false or malicious reporting users can be penalized, e.g.,
temporary suspension or curtailed reporting rights. The penalties are imposed through
blockchain smart contracts, with automated and transparent rule enforcement without the need
for centralized authorities.
CONCLUSION
A blockchain-based system for tracing the origin and destination of products provides a
revolutionary approach for governments to improve the transparency, traceability, and efficiency of
the supply chain. With the immutability ledger and smart contract functionality of blockchain, the
system helps overcome crucial problems such as fraud, counterfeit goods, and compliance with
regulatory issues. Real-time monitoring and data sharing allows the stakeholders such as
manufacturers, logistics providers, and government agencies to work effectively yet in trust and
accountability. It also provides a chance of accuracy and on time update from the IoT devices and
digital identifiers which increases the supply chain reliability overall.
Although the pilot phase has provided a great prospect for the system, success will be more a matter
of larger-scale challenges facing the system's adoption resistance and interoperability of legacy
systems alongside the need to update regulatory frameworks. Optimization over time, training with
stakeholders, and scalability will further be important requirements for full implementation.
Ultimately, this blockchain solution represents a significant step forward in modernizing supply
chain management, fostering economic growth, and ensuring product safety and authenticity in an
increasingly globalized market
REFERENCES
[1]. Saberi, S., Kouhizadeh, M., Sarkis, J., & Shen, L. (2019). Blockchain Technology and Its
Relationships to Sustainable Supply Chain Management. International Journal of Production
Research, 57(7), 2117-2135. DOI: 10.1080/00207543.2018.1533261
[2]. Tian, F. (2017). A Supply Chain Traceability System for Food Safety Based on HACCP,
Blockchain & Internet of Things. Proceedings of the 14th International Conference on Services
Systems and Services Management, 1-6. DOI: 10.1109/ICSSSM.2017.7996119
[3]. Wang, Y., Han, J. H., & Beynon-Davies, P. (2019). Understanding Blockchain Technology
for Future Supply Chains: A Systematic Literature Review and Research Agenda. Supply Chain
Management: An International Journal, 24(1), 62-84. DOI: 10.1108/SCM-03-2018-0148
[4]. Zhang, Y., & Wen, J. (2017). The IoT-Enabled Blockchain for Trustworthy Management of
Logistics and Supply Chain. Proceedings of the 2017 International Conference on Smart
Blockchain, 10-12. DOI: 10.1007/978-3-319-63688-7_7
[5]. Kamilaris, A., Fonts, A., & Prenafe a- Boldú, F. X. (2019). The Rise of Blockchain
Technology in Agriculture and Food Supply Chains. Trends in Food Science & Technology, 91,
640-652. DOI: 10.1016/j.tifs.2019.07.034
[6]. Nakamoto, S. (2008). Bitcoin: A Peer-to-Peer Electronic Cash System. Retrieved from
https://bitcoin.org
[7]. Mougayar, W. (2016). The Business Blockchain: Promise, Practice, and Application of the
Next Internet Technology. Wiley.
[8]. Tapscott, D., & Tapscott, A. (2016). Blockchain Revolution: How the Technology Behind
Bitcoin and Other Cryptocurrencies is Changing the World. Penguin Random House.
[9]. Christidis, K., & Devetsikiotis, M. (2016). Blockchain and Smart Contracts for the Internet
of Things. IEEE Access, 4, 2292-2303. DOI: 10.1109/ACCESS.2016.2566339