
OPERATING SYSTEMS

Semester Project:

Game-Theoretic Approach to Distributed Computing: Phase II


Submitted By:

Khawar Hanif
FA09-BS(TN)-021

History of Distributed Computing:


Concurrent processes that communicate by message passing first appeared in operating system architectures in the 1960s. The first widespread distributed systems were local-area networks such as Ethernet. ARPANET, the predecessor of the Internet, was introduced in the late 1960s, and ARPANET e-mail was invented in the early 1970s. E-mail became the most successful application of ARPANET, and it is probably the earliest example of a large-scale distributed application. Besides ARPANET and its successor, the Internet, other early worldwide computer networks included Usenet and FidoNet from the 1980s, both of which were used to support distributed discussion systems.

The study of distributed computing became its own branch of computer science in the late 1970s and early 1980s. The first conference in the field, the Symposium on Principles of Distributed Computing (PODC), dates back to 1982, and its European counterpart, the International Symposium on Distributed Computing (DISC), was first held in 1985.

In the earliest days of computing, any task too large to run on a small number of machines was handed over to the government or to large companies, since only a handful of organizations could afford supercomputers and the infrastructure needed to support them. As personal computing declined rapidly in price while supercomputers remained very expensive, an alternative was needed. So, in 1993, Donald Becker and Thomas Sterling introduced Beowulf clustering. It was not the first example of clustering, but it was the first approach that let anyone assemble off-the-shelf computers into a cluster that could rival the power of a supercomputer. The point of clustering is that a group of smaller computers, working together, can process amounts of data that none of them could compute individually. Clustering also lowers the cost of computing: commodity machines, even spread across different locations, cost far less than a supercomputer, putting large computations within reach of smaller companies that lack the resources to buy one. All the computers in the cluster are connected over a network and work together as if they stood side by side.

Initially, real-time access was accomplished via low-level socket communications. Usually written in assembly language or C, socket programming was complex and required a deep understanding of the underlying network protocols. Over time, protocols such as the Network File System (NFS) and the File Transfer Protocol (FTP) came on the scene that abstracted away the complexity of sockets, and companies such as TIBCO emerged that developed "middleware" software explicitly designed to facilitate messaging and communication between servers. Eventually, the ability to create distributed applications became feasible through the development of remote procedure calls (RPCs). RPCs enabled discrete functions to be performed by remote computers as though they were running locally. As Sun Microsystems' slogan puts it, "The Network is the Computer." The sketches below illustrate the contrast between the two styles.
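To make the contrast concrete, here is a minimal sketch in Python (an illustrative language choice; the early implementations described above would have been written in C or assembly). The host, port, and function names are hypothetical. The first fragment shows the low-level socket style: the caller opens a connection, frames and sends raw bytes, and must parse whatever comes back itself.

    import socket
    import threading
    import time

    def echo_server(host="127.0.0.1", port=9000):
        # Bind, listen, accept a single connection, and echo the bytes back.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind((host, port))
            srv.listen(1)
            conn, _ = srv.accept()
            with conn:
                conn.sendall(conn.recv(1024))

    # Run the server in the background and give it a moment to bind.
    threading.Thread(target=echo_server, daemon=True).start()
    time.sleep(0.2)

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect(("127.0.0.1", 9000))
        cli.sendall(b"hello")     # the caller sends raw bytes over the wire...
        print(cli.recv(1024))     # ...and parses the raw reply itself: b'hello'

The second fragment shows the same round trip through an RPC layer (Python's standard-library XML-RPC, standing in here for classic RPC systems such as Sun's ONC RPC). The caller invokes the remote function as though it were local; the sockets, framing, and wire format are all hidden behind the proxy.

    import threading
    import xmlrpc.client
    from xmlrpc.server import SimpleXMLRPCServer

    def add(a, b):
        # An ordinary local function, exposed to remote callers by the RPC layer.
        return a + b

    server = SimpleXMLRPCServer(("127.0.0.1", 8000), logRequests=False)
    server.register_function(add, "add")
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # The proxy makes the remote function callable like a local one.
    proxy = xmlrpc.client.ServerProxy("http://127.0.0.1:8000/")
    print(proxy.add(2, 3))   # 5, computed in the server process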
