An Introduction to Peer-to-Peer Systems


Presentation Transcript

Slide 1

An Introduction to Peer-to-Peer Systems
Presentation for CSE620: Advanced Networking
Anh Le. Nov. 4

Slide 2

Outline
- Overview of P2P
- Classification of P2P
- Unstructured P2P systems: Napster (centralized), Gnutella (distributed), Kazaa/FastTrack (super-node)
- Structured P2P systems (DHTs): Chord
- YAPPERS (hybrid)
- Conclusions

Slide 3

What is a P2P system?
Clay Shirky: P2P refers to applications that take advantage of resources (storage, cycles, content, human presence) available at the edges of the Internet. The "litmus test": Does it allow for variable connectivity and temporary network addresses? Does it give the nodes at the edges of the network significant autonomy?
P2P Working Group (a standardization effort): P2P computing is the sharing of computer resources and services by direct exchange between systems. Peer-to-peer computing takes advantage of existing computing power and networking connectivity, allowing economical clients to leverage their collective power to benefit the entire enterprise.

Slide 4

What is a P2P system?
- Many sites (at the edge)
- Distributed resources
- Sites are autonomous (different owners)
- Sites are both clients and servers ("servent")
- Sites have equal functionality

Slide 5

P2P benefits
- Efficient use of resources
- Scalability: consumers of resources also provide resources; aggregate resources grow naturally with utilization
- Reliability: replicas, geographic distribution, no single point of failure
- Ease of administration: nodes self-organize; no need to deploy servers to satisfy demand; built-in fault tolerance, replication, and load balancing

Slide 6

Napster
- Was used primarily for file sharing
- NOT a pure P2P network => hybrid system
- Modes of operation:
  - Client sends the server a query; the server asks everybody and responds to the client
  - Client gets a list of clients from the server
  - All clients send the IDs of the data they hold to the server; when a client requests data, the server responds with the specific addresses
  - The peer then downloads directly from the other peer(s)
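To make the centralized mode concrete, here is a minimal sketch of Napster-style indexing. The class and method names (IndexServer, register, query) are illustrative, not Napster's actual protocol.

```python
class IndexServer:
    """Sketch of a Napster-style central index (illustrative names only)."""

    def __init__(self):
        self.index = {}  # file id -> set of peer addresses holding it

    def register(self, peer_addr, file_ids):
        # Each client sends the IDs of the data it holds to the server.
        for fid in file_ids:
            self.index.setdefault(fid, set()).add(peer_addr)

    def query(self, file_id):
        # The server responds with peer addresses; the download itself
        # then happens directly between peers.
        return self.index.get(file_id, set())

server = IndexServer()
server.register("10.0.0.5:6699", ["song_a", "song_b"])
server.register("10.0.0.7:6699", ["song_b"])
print(server.query("song_b"))  # both peers; client downloads from one directly
```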

Slide 7

Napster
- Further services: chat program, instant messaging service, tracking program, ...
- Centralized system: single point of failure => limited fault tolerance; limited scalability (server farms with load balancing)
- Queries are fast, and an upper bound on their duration can be given

Slide 8

Napster
[Diagram: peers 1-6 around a central DB; 1. Query, 2. Response, 3. Download Request, 4. File; the download goes directly between peers.]

Slide 9

Gnutella
- Pure peer-to-peer; very simple protocol; no routing "intelligence"
- Constrained broadcast: lifetime of packets limited by a TTL (typically set to 7); packets have unique IDs to detect loops
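A minimal sketch of this constrained broadcast, with illustrative names: queries carry a unique ID and a TTL of 7; a node drops packets it has already seen or whose TTL has expired, and otherwise forwards them to all neighbors except the one they came from.

```python
import uuid

DEFAULT_TTL = 7  # typical Gnutella setting

class GnutellaNode:
    def __init__(self, name):
        self.name = name
        self.neighbors = []
        self.seen = set()  # packet IDs already handled, to detect loops

    def start_query(self, keyword):
        self.handle(uuid.uuid4().hex, keyword, DEFAULT_TTL, sender=None)

    def handle(self, packet_id, keyword, ttl, sender):
        if packet_id in self.seen or ttl <= 0:
            return  # duplicate or expired packet: drop it
        self.seen.add(packet_id)
        # ... check local files here and route a hit back toward the origin ...
        for peer in self.neighbors:
            if peer is not sender:
                peer.handle(packet_id, keyword, ttl - 1, sender=self)

# Tiny ring of three nodes; the query floods but never loops.
a, b, c = GnutellaNode("a"), GnutellaNode("b"), GnutellaNode("c")
a.neighbors, b.neighbors, c.neighbors = [b, c], [a, c], [a, b]
a.start_query("some file")
```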

Slide 10

Gnutella - PING/PONG
[Diagram: node 1 floods a Ping to its neighbors; Pongs travel back along the reverse paths, so node 1 learns of hosts 2 through 8. Query/Response works analogously.]

Slide 11

Free riding
- File sharing networks rely on users sharing data
- Two types of free riding: downloading but not sharing any data; not sharing any interesting data
- On Gnutella: 15% of users contribute 94% of the content; 63% of users never responded to a query (they didn't have "interesting" data)

Slide 12

Gnutella: summary
- Hit rates are high
- High fault tolerance; adapts well and dynamically to changing peer populations
- High network traffic
- No estimates on the duration of queries; no success probability can be given for queries
- Topology is unknown => algorithms cannot exploit it
- Free riding is a problem: a large fraction of Gnutella peers are free riders; free riders are distributed evenly across domains; hosts often share files nobody is interested in

Slide 13

Gnutella discussion
- Search types: any possible string comparison
- Scalability: search is very poor in terms of number of messages; search time is probably O(log n) due to the small-world property; updates are excellent: nothing to do; routing information: low cost
- Robustness: high, since many paths are explored
- Autonomy: storage: no restriction, peers store the keys of their own files; routing: peers are the target of all kinds of requests
- Global knowledge: none required

Slide 14

iMesh, Kazaa
- Hybrid of centralized Napster and decentralized Gnutella
- Super-peers act as local search hubs: each super-peer is similar to a Napster server for a small portion of the network; super-peers are automatically chosen by the system based on their capabilities (storage, bandwidth, etc.) and availability (connection time)
- Users upload their list of files to a super-peer
- Super-peers periodically exchange file lists
- Queries are sent to a super-peer for files of interest
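A sketch of the two-tier idea, with hypothetical names (this is not the actual FastTrack protocol): ordinary peers upload their file lists to a super-peer, and a query consults the local index first, then neighboring super-peers.

```python
class SuperPeer:
    """Illustrative two-tier index: a Napster-like server for its region."""

    def __init__(self):
        self.local_index = {}  # file id -> address of an attached peer
        self.neighbors = []    # other super-peers

    def upload_list(self, peer_addr, file_ids):
        # An ordinary peer registers its files with its super-peer.
        for fid in file_ids:
            self.local_index[fid] = peer_addr

    def query(self, file_id, hops=1):
        # Answer locally if possible; otherwise ask neighboring
        # super-peers instead of flooding every peer in the network.
        if file_id in self.local_index:
            return self.local_index[file_id]
        if hops > 0:
            for sp in self.neighbors:
                hit = sp.query(file_id, hops - 1)
                if hit is not None:
                    return hit
        return None
```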

Slide 15

Structured Overlay Networks / DHTs
Chord, Pastry, Tapestry, CAN, Kademlia, P-Grid, Viceroy
[Diagram: a set of nodes and the keys of values are hashed into a common identifier space (node identifiers and value identifiers); the nodes are then connected smartly.]

Slide 16

The Principle of Distributed Hash Tables
- A dynamic distribution of a hash table onto a set of cooperating nodes
- Basic service: lookup operation; key resolution from any node
- Each node has a routing table: pointers to some other nodes; typically a constant or a logarithmic number of pointers
[Diagram: node A -> node B -> node C -> node D; Node D: lookup(9).]

Slide 17

DHT Desirable Properties
- Keys are mapped uniformly to all nodes in the network
- Each node maintains information about only a few other nodes
- Messages can be routed to a node efficiently
- Node arrivals/departures affect only a few nodes

Slide 18

Chord [MIT]
- Consistent hashing (SHA-1) assigns each node and object an m-bit ID
- IDs are ordered on an identifier circle ranging from 0 to 2^m - 1
- New nodes assume slots on the ID circle according to their ID
- Key k is assigned to the first node whose ID >= k -> successor(k)
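A sketch of these two rules with a small m for readability (Chord itself uses m = 160 with SHA-1); successor() here is a plain linear scan, just to pin down the definition.

```python
import hashlib

M = 8  # identifier bits for the sketch; Chord uses m = 160

def chord_id(key: bytes) -> int:
    # SHA-1 hash truncated to an m-bit identifier.
    return int.from_bytes(hashlib.sha1(key).digest(), "big") % (2 ** M)

def successor(k: int, node_ids: list[int]) -> int:
    # First node on the identifier circle whose ID >= k,
    # wrapping around to the smallest ID if no such node exists.
    ordered = sorted(node_ids)
    for n in ordered:
        if n >= k:
            return n
    return ordered[0]

nodes = [chord_id(f"node-{i}".encode()) for i in range(4)]
k = chord_id(b"some-object")
print(f"key {k} is stored at node {successor(k, nodes)}")
```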

Slide 19

Consistent Hashing - Successor Nodes
[Diagram: identifier circle 0-7 with nodes 0, 1, and 3; keys 1, 2, and 6 map to successor(1) = 1, successor(2) = 3, successor(6) = 0.]

Slide 20

Consistent Hashing - Join and Departure
When a node n joins the network, certain keys previously assigned to n's successor now become assigned to n. When node n leaves the network, all of its assigned keys are reassigned to n's successor.
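A tiny worked example of this rule, reusing the successor() helper from the previous sketch on the 3-bit circle of the surrounding slides: when node 6 joins, only key 6 moves, and it moves from node 6's successor (node 0).

```python
# Assumes successor() from the previous sketch; 3-bit circle 0..7.
nodes = [0, 1, 3]
keys = [1, 2, 6]

before = {k: successor(k, nodes) for k in keys}
after = {k: successor(k, nodes + [6]) for k in keys}  # node 6 joins

print(before)  # {1: 1, 2: 3, 6: 0}
print(after)   # {1: 1, 2: 3, 6: 6} -- only key 6 moved, from node 0 to node 6
```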

Slide 21

Consistent Hashing - Node Join
[Diagram: identifier circle 0-7; node 6 joins; key 6, previously held by node 0 (together with key 7), moves to node 6; keys 1 and 2 remain at nodes 1 and 3.]

Slide 22

Consistent Hashing - Node Departure
[Diagram: identifier circle 0-7; node 6 departs; its key 6 is reassigned to its successor, node 0, which already holds key 7; keys 1 and 2 remain at nodes 1 and 3.]

Slide 23

Scalable Key Location - Finger Tables
To accelerate lookups, Chord maintains additional routing information. This additional information is not essential for correctness, which is achieved as long as every node knows its correct successor.
Each node n maintains a routing table with up to m entries (where m is the number of bits in the identifiers), called the finger table. The i-th entry in the table at node n contains the identity of the first node s that succeeds n by at least 2^(i-1) on the identifier circle: s = successor(n + 2^(i-1)). s is called the i-th finger of node n, denoted n.finger(i).
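The definition above translates directly into code; this sketch reuses successor() from the earlier sketch and reproduces the finger tables of the 3-bit example on the next slide.

```python
M = 3  # identifier bits for the example circle 0..7

def finger_table(n: int, node_ids: list[int]) -> list[tuple[int, int]]:
    # i-th entry (i = 1..m): start = (n + 2^(i-1)) mod 2^m,
    # finger = successor(start); successor() is from the earlier sketch.
    table = []
    for i in range(1, M + 1):
        start = (n + 2 ** (i - 1)) % (2 ** M)
        table.append((start, successor(start, node_ids)))
    return table

print(finger_table(0, [0, 1, 3]))  # [(1, 1), (2, 3), (4, 0)]
print(finger_table(3, [0, 1, 3]))  # [(4, 0), (5, 0), (7, 0)]
```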

Slide 24

Scalable Key Location - Finger Tables
[Diagram: identifier circle 0-7 with nodes 0, 1, and 3 and their finger tables:
  node 0 (keys: 6): start 1 -> succ. 1 | start 2 -> succ. 3 | start 4 -> succ. 0
  node 1 (keys: 1): start 2 -> succ. 3 | start 3 -> succ. 3 | start 5 -> succ. 0
  node 3 (keys: 2): start 4 -> succ. 0 | start 5 -> succ. 0 | start 7 -> succ. 0]

Slide 25

Chord Key Location
Look up in the finger table the furthest node that precedes the key -> O(log n) hops
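A compact sketch of that lookup loop, simulating hops locally with plain dicts instead of remote calls (names are illustrative): fingers maps each node to its finger targets and succ to its immediate successor, as built on the previous slides.

```python
def in_open(x: int, a: int, b: int) -> bool:
    # True if x lies in the open interval (a, b) on the identifier circle.
    return (a < x < b) if a < b else (x > a or x < b)

def find_successor(start, key, fingers, succ):
    n = start
    # Hop to the furthest finger preceding the key until the key falls
    # in (n, successor(n)]; each hop roughly halves the remaining
    # distance, giving O(log n) hops.
    while not in_open(key, n, succ[n]) and key != succ[n]:
        n = next((f for f in reversed(fingers[n]) if in_open(f, n, key)),
                 succ[n])
    return succ[n]

# The 3-bit example ring with nodes 0, 1, 3:
succ = {0: 1, 1: 3, 3: 0}
fingers = {0: [1, 3, 0], 1: [3, 3, 0], 3: [0, 0, 0]}
print(find_successor(1, 6, fingers, succ))  # 0, i.e. successor(6)
```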

Slide 26

Node Joins and Stabilizations
The most important thing is the successor pointer. If the successor pointer is ensured to be up to date, which is sufficient to guarantee correctness of lookups, then the finger table can always be verified. Each node runs a "stabilization" protocol periodically in the background to update its successor pointer and finger table.

Slide 27

Node Joins and Stabilizations "Adjustment" convention contains 6 capacities: make() join() settle() advise() fix_fingers() check_predecessor() When hub n first begins, it calls n . join (n " ), where n " is any known Chord hub. The join () work requests that n " locate the prompt successor of n. Anh Le + Tuong Nguyen

Slide 28

Node Joins - stabilize()
Each time node n runs stabilize(), it asks its successor for the successor's predecessor p, and decides whether p should be n's successor instead. stabilize() also notifies node n's successor of n's existence, giving the successor the chance to change its predecessor to n. The successor does this only if it knows of no closer predecessor than n.
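Continuing the ChordNode sketch from the previous slide, stabilize() and notify() might look like the following; again a simplified, single-process stand-in for the remote calls of the real protocol, with in_open() the same circular-interval helper used in the lookup sketch.

```python
def in_open(x: int, a: int, b: int) -> bool:
    # True if x lies in the open interval (a, b) on the identifier circle.
    return (a < x < b) if a < b else (x > a or x < b)

class ChordNode(ChordNode):  # extends the sketch from the previous slide
    def stabilize(self):
        # Ask our successor for its predecessor p; if p lies between us
        # and our successor, p is a closer successor, so adopt it.
        p = self.successor.predecessor
        if p is not None and in_open(p.id, self.id, self.successor.id):
            self.successor = p
        # Notify the successor of our existence so it can update its
        # predecessor pointer.
        self.successor.notify(self)

    def notify(self, n: "ChordNode"):
        # n believes it may be our predecessor; accept it only if we
        # know of no closer predecessor than n.
        if self.predecessor is None or in_open(
                n.id, self.predecessor.id, self.id):
            self.predecessor = n
```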

Slide 29

Node Joins - Join and Stabilization
- n joins: predecessor = nil; n acquires n_s as its successor via some n'
- n runs stabilize: n notifies n_s of its existence, and n_s acquires n as its predecessor (pred(n_s) = n)
- n_p runs stabilize: n_p asks n_s for its predecessor (now n) and acquires n as its successor (succ(n_p) = n); n_p notifies n, and n acquires n_p as its predecessor
- All predecessor and successor pointers are now correct; fingers still need to be fixed, but old fingers will still work
[Diagram: before stabilization, succ(n_p) = n_s and pred(n_s) = n_p; afterwards, succ(n_p) = n and pred(n_s) = n.]

Slide 30

Node Failures
The key step in failure recovery is maintaining correct successor pointers. To achieve this, each node maintains a successor list of its r nearest successors on the ring. If node n notices that its successor has failed, it replaces it with the first live entry in the list.
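A sketch of that recovery step, with illustrative names: successor_list holds the r nearest successors, and is_alive() stands in for a network liveness probe (e.g. a ping RPC).

```python
R = 3  # length of the successor list (the parameter r)

def is_alive(node) -> bool:
    # Stand-in for a real liveness probe over the network.
    return getattr(node, "alive", True)

def repair_successor(node) -> None:
    # If the immediate successor has failed, promote the first live
    # entry of the successor list in its place.
    for candidate in node.successor_list[:R]:
        if is_alive(candidate):
            node.successor = candidate
            return
    raise RuntimeError("all r successors failed")
```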
