Wednesday, May 31, 2023

What's the difference between a MAC address and IP address?

 



Every computer or device on the internet has two types of addresses: its physical address and its internet address. The physical address -- which is also called a media access control, or MAC, address -- identifies a device to other devices on the same local network. The internet address -- or IP address -- identifies the device globally. A network packet needs both addresses to get to its destination.

MAC address vs. IP address: What's the difference?

Both MAC addresses and IP addresses are meant to identify a network device, but in different ways. Some of the main differences between a MAC address and an IP address include the following:

   >   local identification vs. global identification;
   >   Layer 2 vs. Layer 3 operation;
   >   physical address vs. logical address;
   >   number of bits;
   >   address assignment and permanence; and
   >   address formatting.

A MAC address is responsible for local identification and an IP address for global identification. This is the primary difference between a MAC address and IP address, and it affects how they differ in their number of bits, address assignment and interactions. The MAC address is only significant on the LAN to which a device is connected, and it is not used or retained in the data stream once packets leave that network.

Any piece of internet software, such as a web browser, directs data to a destination on the internet using the destination's IP address. That address is inserted into the data packets that the network software stack sends out. People rarely use the address numbers directly; instead, they use DNS names, which software resolves to the matching IP address.

Internet routers move the packets from the source network to the destination network and then to the LAN on which the destination device is connected. That local network translates the IP address to a MAC address, adds the MAC address to the data stream and sends the data to the right device.




Another difference between a MAC address and IP address is the way the addresses are assigned. An IP address is bound to a network device via software configurations, and network administrators can change it at any time.

Devices on the local network, including routers and Layer 3 switches, maintain Address Resolution Protocol (ARP) tables that map IP addresses to MAC addresses. When a packet arrives for a destination IP address on the LAN, the ARP table supplies the MAC address to attach when the data is forwarded to the device as Ethernet frames.
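
To make the lookup concrete, here is a minimal Python sketch of ARP-style resolution, assuming an already-populated table. The addresses and the resolve_mac helper are invented for illustration, not taken from any real device.

    # Minimal ARP-style lookup sketch. A real device populates this table
    # dynamically with ARP requests and replies; these entries are invented.
    arp_table = {
        "192.168.1.10": "00-1A-2B-3C-4D-5E",
        "192.168.1.11": "00-1A-2B-3C-4D-5F",
    }

    def resolve_mac(dest_ip: str) -> str:
        """Return the MAC address for a destination IP on the local segment."""
        mac = arp_table.get(dest_ip)
        if mac is None:
            # A real device would broadcast "who has dest_ip?" and cache the reply.
            raise LookupError(f"No ARP entry for {dest_ip}")
        return mac

    # The resolved MAC becomes the Ethernet frame's destination address;
    # the IP address stays inside the encapsulated packet.
    frame = {"dst_mac": resolve_mac("192.168.1.10"), "payload": b"...packet..."}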

What is a MAC address?

Media access control refers to the piece of hardware that controls how data is pushed out onto a network. In the OSI reference model for networking, the MAC is a Layer 2 -- or data link layer -- device, and the MAC address is a Layer 2 address. In the current internet era, most devices are connected physically with Ethernet cables or wirelessly with Wi-Fi. Both methods use MAC addresses to identify a device on the network.

A MAC address consists of 12 hexadecimal digits, usually grouped into six pairs separated by hyphens. MAC addresses are available from 00-00-00-00-00-00 through FF-FF-FF-FF-FF-FF. The first half of the number is typically used as a manufacturer ID, while the second half is a device identifier. In nearly all enterprise network devices today, whether Wi-Fi or Ethernet, this number is hardcoded into the device during the manufacturing process.

Each MAC address is unique to the network card installed on a device, but the number of device-identifying bits is limited, which means manufacturers do reuse them. Each manufacturer ID covers about 16.8 million available addresses, so when a manufacturer burns a device with a MAC address ending in FF-FF-FF, it starts again at 00-00-00. This approach assumes it is highly unlikely that two devices with the same address will end up on the same local network segment.
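
A short Python sketch shows the split between the manufacturer ID and the device identifier, and where the roughly 16.8 million figure comes from. The sample address is invented.

    def split_mac(mac: str) -> tuple[str, str]:
        """Split a hyphen-separated MAC into (manufacturer ID, device ID)."""
        parts = mac.upper().split("-")
        if len(parts) != 6 or any(len(p) != 2 for p in parts):
            raise ValueError(f"Not a valid MAC address: {mac}")
        return "-".join(parts[:3]), "-".join(parts[3:])

    oui, device_id = split_mac("00-1A-2B-3C-4D-5E")
    print(oui, device_id)   # 00-1A-2B 3C-4D-5E

    # The device identifier is 24 bits, so each manufacturer ID covers:
    print(2 ** 24)          # 16777216 addresses, about 16.8 million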

What is an IP address?

IP controls how devices on the internet communicate and defines the behavior of internet routers. It corresponds to Layer 3, the network layer, of the OSI reference model. The internet was initially built around IP version 4 (IPv4) and is in transition to IPv6.

An IP address identifies a device on the global internet, acting as the device's logical address to identify that network connection. An IPv4 address consists of 32 bits, usually written as four decimal numbers, or a dotted quad. Possible values range from 0.0.0.0 through 255.255.255.255, although many possible addresses are disallowed or reserved for specific purposes.

The address combines network identification and device identification data. The network prefix is anywhere from eight to 31 bits, and the remaining bits identify the device on the network. Steady, rapid growth in the number of internet-connected devices has led to the looming exhaustion of the IPv4 address space, one of several reasons for the development of IPv6.

An IPv6 address consists of 128 bits, with the first 64 reserved for network identification and the second 64 dedicated to identifying a device on the network. The address is written as eight sets of four hexadecimal digits separated by colons -- for example, FEDC:BA98:7654:3210:0123:4567:89AB:CDEF. Happily, many conventions are available to shorten an IPv6 address when writing it.
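
Python's standard ipaddress module is a convenient way to see both formats and the shortening conventions in action. The sample addresses below are documentation examples, not real hosts.

    import ipaddress

    v4 = ipaddress.ip_address("192.0.2.17")
    print(v4.version, int(v4))               # 4 3221226001 (the 32-bit value)

    net = ipaddress.ip_network("192.0.2.0/24")
    print(net.prefixlen, net.num_addresses)  # 24 256

    v6 = ipaddress.ip_address("FEDC:BA98:7654:3210:0123:4567:89AB:CDEF")
    print(v6)                                # leading zeros in each group drop

    # Runs of zero groups compress to '::' under the standard conventions.
    print(ipaddress.ip_address("2001:0db8:0000:0000:0000:0000:0000:0001"))
    # 2001:db8::1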





#networkswitch #networkdiagram #network #gigabit #bandwidth #topology
#firewall #tcp #dhcp #networkspeed #networking #scheduling #protocols
#fiberoptics #networkmarketing #servers #bluetooth #networkanalysis
#macaddress #ipaddress #neuralnetworks #server #ethernet #routing





Tuesday, May 30, 2023

6th Edition of International Research Awards on Network Protocols | 23-24 June 2023 | San Francisco, United States (Hybrid)





6th Edition of International Research Awards on Network Protocols | 23-24 June 2023 | San Francisco, United States (Hybrid).

The Network Protocols Awards recognize researchers and research organizations around the world, encouraging and honoring them for their significant contributions and achievements in their fields of expertise. Researchers and scholars of all nationalities are eligible to receive the ScienceFather Network Protocols Awards. Nominees are judged on past accomplishments, research excellence and outstanding academic achievements.


Visit Our Awards Nomination Link: https://x-i.me/prinom
Visit Our Registration Link: https://x-i.me/prireg2



#networkswitch #networkdiagram #network #gigabit #bandwidth #topology #ethernet
#firewall #tcp #dhcp #networkspeed #networking #scheduling #protocols #routing
#fiberoptics #networkmarketing #servers #bluetooth #networkanalysis



6th Edition of International Conference on Network Protocols | 23-24 June 2023 | San Francisco, United States (Hybrid)





6th Edition of International Conference on Network Protocols | 23-24 June 2023 | San Francisco, United States (Hybrid).

The Network Protocols Conference is organized by the ScienceFather group. ScienceFather takes the privilege of inviting speakers, participants, students, delegates and exhibitors from across the globe to its Global Conference on Network Protocols, held in various beautiful cities of the world.



Visit Our Nomination Link: https://x-i.me/primemb
Visit Our Registration Link: https://x-i.me/prireg1


#networkswitch #networkdiagram #network #gigabit #bandwidth #topology #ethernet 
#firewall #tcp #dhcp #networkspeed #networking #scheduling #protocols #routing 
#fiberoptics #networkmarketing #servers #bluetooth #networkanalysis


Wednesday, May 24, 2023

Social media privacy concerns





See more information: network.sciencefather.com







#PrivacyMatters #DataProtection #OnlinePrivacy #PrivacyAwareness #SocialMediaPrivacy
#DigitalPrivacy #PrivacyRights #PrivacyConcerns #DataSecurity #ProtectYourPrivacy
#PrivacySettings #PrivacyAware #PrivacyIsImportant #PrivacyFirst #PrivacyAdvocate
#PrivacyAwarenessMonth #PrivacyIssues #PrivacyThreats #PrivacyViolation




Tuesday, May 23, 2023

A method for designing neural networks optimally suited for certain tasks

 



Neural networks, a type of machine-learning model, are being used to help humans complete a wide variety of tasks, from predicting whether someone's credit score is high enough to qualify for a loan to diagnosing whether a patient has a certain disease. But researchers still have only a limited understanding of how these models work. Whether a given model is optimal for a certain task remains an open question.

MIT researchers have found some answers. They conducted an analysis of neural networks and proved that they can be designed so they are "optimal," meaning they minimize the probability of misclassifying borrowers or patients when the networks are given a lot of labeled training data. To achieve optimality, these networks must be built with a specific architecture.

The researchers discovered that, in certain situations, the building blocks that enable a neural network to be optimal are not the ones developers use in practice. These optimal building blocks, derived through the new analysis, are unconventional and haven't been considered before, the researchers say.

In a paper published this week in the Proceedings of the National Academy of Sciences, they describe these optimal building blocks, called activation functions, and show how they can be used to design neural networks that achieve better performance on any dataset. The results hold even as the neural networks grow very large. This work could help developers select the correct activation function, enabling them to build neural networks that classify data more accurately in a wide range of application areas, explains senior author Caroline Uhler, a professor in the Department of Electrical Engineering and Computer Science (EECS).

"While these are new activation functions that have never been used before, they are simple functions that someone could actually implement for a particular problem. This work really shows the importance of having theoretical proofs. If you go after a principled understanding of these models, that can actually lead you to new activation functions that you would otherwise never have thought of," says Uhler, who is also co-director of the Eric and Wendy Schmidt Center at the Broad Institute of MIT and Harvard, and a researcher at MIT's Laboratory for Information and Decision Systems (LIDS) and its Institute for Data, Systems and Society (IDSS).

Activation investigation

A neural network is a type of machine-learning model that is loosely based on the human brain. Many layers of interconnected nodes, or neurons, process data. Researchers train a network to complete a task by showing it millions of examples from a dataset.

For instance, a network that has been trained to classify images into categories, say dogs and cats, is given an image that has been encoded as numbers. The network performs a series of complex multiplication operations, layer by layer, until the result is just one number. If that number is positive, the network classifies the image as a dog, and if it is negative, as a cat.

Activation functions help the network learn complex patterns in the input data. They do this by applying a transformation to the output of one layer before the data are sent to the next layer. When researchers build a neural network, they select one activation function to use. They also choose the width of the network (how many neurons are in each layer) and the depth (how many layers are in the network).
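
The role of the activation function is easy to see in a toy forward pass. The sketch below uses NumPy and the standard ReLU function (the paper's unconventional optimal functions are not spelled out here); the weights and layer sizes are arbitrary.

    import numpy as np

    def relu(x):
        """A standard activation function; the paper's optimal ones differ."""
        return np.maximum(0.0, x)

    rng = np.random.default_rng(0)
    width, depth = 8, 3                      # neurons per layer, layer count
    layers = [rng.standard_normal((width, width)) * 0.5 for _ in range(depth)]
    readout = rng.standard_normal(width)

    def classify(x):
        """Transform layer by layer, then reduce to a single number."""
        for w in layers:
            x = relu(w @ x)                  # activation applied between layers
        score = readout @ x
        return "dog" if score > 0 else "cat" # the sign picks the class

    print(classify(rng.standard_normal(width)))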

"It turns out that, if you take the standard activation functions that people use in practice, and keep increasing the depth of the network, it gives you really terrible performance. We show that if you design with different activation functions, as you get more data, your network will get better and better," says Radhakrishnan.

He and his collaborators studied a situation in which a neural network is infinitely deep and wide—which means the network is built by continually adding more layers and more nodes—and is trained to perform classification tasks. In classification, the network learns to place data inputs into separate categories.




See more information: network.sciencefather.com






#NeuralNetworks #DeepLearning #ArtificialIntelligence #MachineLearning #AI #network
#DataScience #Neurons #ComputerVision #NaturalLanguageProcessing #protocols
#TensorFlow #PyTorch #ConvolutionalNeuralNetworks #RecurrentNeuralNetworks
#BigData #PatternRecognition #MLAlgorithms #NeuralNetworkArchitecture



Saturday, May 20, 2023

The Need for Speed: Network Automation or Manual Changes?




Network engineers who learned the command line interface (CLI) for configuring network devices often prefer to make changes using manual processes, citing the speed with which changes can be applied. Is this really the case? 

The Manual Change Process

The manual change process isn’t just about typing commands directly into network devices. The smart engineer creates the changes in a text document and uses cut and paste to apply configuration updates. The text document is needed anyway for the change control board (CCB) to review the proposed change. Making network changes without going through the CCB is not a valid process for the purposes of this comparison with automation. The vast majority of companies use a CCB to try to reduce the number of network outages caused by configuration changes, which are the most frequent source of outages (up to 80%, according to some analysts).

Most of the manual change processes I’ve seen have omitted the pre-change validation that the network was functioning correctly prior to implementing the change. And frequently, the post-change validation is simply checking the output of show configuration or show run. Instead, pre-change and post-change validation should check the subsystems that are expected to be affected by the change, such as routing protocols, connectivity, and neighbor relationships.
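
As a sketch of what subsystem-level validation can look like, here is a short Python example built on the third-party netmiko library. The device address, credentials, commands and the change itself are placeholders for your environment, and a real test would parse the output rather than compare raw text.

    # pip install netmiko -- third-party library for CLI-managed devices.
    from netmiko import ConnectHandler

    DEVICE = {"device_type": "cisco_ios", "host": "192.0.2.1",
              "username": "admin", "password": "secret"}
    CHECKS = ["show ip ospf neighbor", "show ip bgp summary"]

    def snapshot(conn):
        """Capture the state of the subsystems the change should touch."""
        return {cmd: conn.send_command(cmd) for cmd in CHECKS}

    conn = ConnectHandler(**DEVICE)
    before = snapshot(conn)                          # pre-change validation
    conn.send_config_set(["interface Gi0/1", "description uplink"])
    after = snapshot(conn)                           # post-change validation
    conn.disconnect()

    for cmd in CHECKS:
        if before[cmd] != after[cmd]:
            # Raw text comparison is naive (timers and counters change);
            # real tests extract neighbor states, prefix counts and so on.
            print(f"State changed under '{cmd}' -- review before closing")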

The manual process may indeed be faster than an automated process for changes that need to be applied to a small set of devices, particularly if shortcuts are taken. But I maintain that the process of creating the data to drive automation doesn’t take much more time than a thorough manual process, even for changing a few devices.

Automating the Change Process

Automation offers the opportunity to improve the configuration change process. Using a code repository forces documentation of the proposed changes, and a peer review catches silly mistakes early in the process. Both steps increase the quality of the changes and result in fewer network outages.

Of course, automation becomes more compelling as the number of devices goes up. Examples include configuring the same quality of service (QoS) settings across an entire enterprise with more than a few devices, or changing BGP policies on a set of internet border routers. It’s a relief not to have to repeat mind-numbing and error-prone cut-and-paste operations across a long list of devices.

Just a warning, though: Start by automating one change on a subset of devices, validating that the change does what you want and that the pre-change and post-change validation tests are accurate. Once the automation is validated, the number of devices that can be changed via automation can be increased.
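
One way to structure that expansion is a canary-style rollout that widens the batch only after each batch validates. The sketch below is illustrative; apply_change() and validate() stand in for your own automation and test hooks, and the device names are invented.

    # Staged rollout sketch: small blast radius first, then widen.
    devices = [f"switch{n:02d}" for n in range(1, 21)]   # invented names

    def apply_change(device):
        print(f"applying change to {device}")            # real push goes here

    def validate(device) -> bool:
        print(f"validating {device}")                    # real checks go here
        return True

    batch_size = 2                       # start with a small blast radius
    remaining = devices
    while remaining:
        batch, remaining = remaining[:batch_size], remaining[batch_size:]
        for dev in batch:
            apply_change(dev)
        if not all(validate(dev) for dev in batch):
            raise SystemExit("validation failed -- stop and roll back")
        batch_size *= 2                  # expand only after the batch passes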

Learning Automation

I can hear objections now: But I don’t know anything about automation! This comes from network engineers who learned an arcane command line interface syntax and complex network protocols. Ansible automation is certainly within the reach of anyone who has learned enough to successfully use the CLI. If that approach isn’t viable, then there are non-programming tools available, like Gluware.

It helps to understand the overall approach to automation so that you do better than simply replicating manual processes. Look for courses on sites like Pluralsight and O’Reilly that provide that level of understanding at reasonable pricing. Another approach is to read books like The Phoenix Project, which teaches three basic tenets of DevOps:

   >   Adopt a flow of work that uses small batch sizes. In networking, this means limiting the scope of changes (known as the blast radius). Start small and expand as the automation is validated to do what you want. Limiting work in progress to small batches allows work to flow from one part of the process to the next.

   >   Use fast feedback: small loops and faster correction when something’s wrong. Peer reviews of proposed changes early in the cycle are one such feedback mechanism, and using small batches supports fast feedback.

   >   Adopt a culture that fosters education and experimentation, in which repetition and practice generate mastery.

You can also get started by simply automating read-only processes. Build a few pre-change and post-change network validation tests. This will get you started in an environment where not everyone is on board with automation or has the necessary skills.

Successful adoption of automation depends on a basic culture shift. The entire network team needs to adopt automation workflows and needs to understand how to use the automation systems. Anyone who implements changes manually creates technical debt that needs to be addressed before the automation systems can resume control. It will take some effort to streamline the workflows so that automation is faster than the well-known manual processes, but it is certainly possible.



See more information: network.sciencefather.com






#networkefficiency #networkreliability #networkperformance #networkresilience #networksecurity
#networkavailability #networkscalability #networkrecovery #networksustainability #network
#networkloadbalancing #networkbandwidth #networklatency #networkvirtualization #protocols
#networkautomation #networkprotocols #networking #networkengineering #networkingstandards

International Research Awards on Network Protocols






See more information: network.sciencefather.com

#networkautomation #networkprotocols #networking #networkengineering #networkingstandards
#networkefficiency #networkreliability #networkperformance #networkresilience #networksecurity
#networkavailability #networkscalability #networkrecovery #networksustainability
#networkloadbalancing #networkbandwidth #networklatency #networkvirtualization

 

What is a WAN? Wide-area network definition and examples

 







How is a WAN different from a LAN?


A Local Area Network (LAN) is confined to a relatively small area. In the business world, LANs are generally limited to a single building or a small campus. In a LAN topology, all the devices that end users need to access are connected by switches and routers. Your home Wi-Fi is also a LAN, where you can connect multiple devices, including laptops, desktops, printers and smart home devices via a central router.

When your network requires access to resources that are not available on the LAN, an external link is added to the router. So, while a LAN connects you to local resources on your network, a WAN connects multiple networks together to share resources.

In the case of a company that has a corporate headquarters and multiple branch offices scattered around the world, the WAN connects multiple LANs. While LANs typically connect end users through Ethernet technology, WANs can employ a variety of transport methods.


What is a private WAN?





What is a cloud WAN?




What is an MPLS WAN?




What is a wireless WAN?

A wireless WAN deploys cellular broadband radio devices to connect with a series of radio towers, referred to as cells, which act as base stations that convert the wireless data packets for transit across private or cloud WANs. (It is also possible to connect multiple devices for point-to-point communication using a wireless transport layer.)

The wireless network infrastructure is designed to support millions of connections across a nationwide footprint. As the endpoint transceiver passes beyond the range of a cell, the network automatically hands the connection off to the next, providing uninterrupted connectivity. Since the cellular network is already established, a wireless WAN can be deployed quickly and relatively inexpensively.


Future of WANs

WAN technology has come a long way since the early days of circuit-switched telephone lines and 2400 baud modems. Today, leased lines, wireless, MPLS and the public internet make it possible for you to videoconference on demand from your phone with anyone around the world, back up your data to another city, manage the operations of a self-driving vehicle, and work from any place you can get a radio signal.

WANs aren’t limited to Earth. NASA and other space agencies are working to create a reliable "interplanetary internet," which aims to transmit test messages between the International Space Station and ground stations. The Disruption Tolerant Networking (DTN) program is the first step in providing an internet-like structure for communications between space-based devices, including communications between Earth and the Moon or other planets.




See more information: network.sciencefather.com




 



#networkloadbalancing #networkbandwidth #networklatency #networkvirtualization
#networkautomation #networkprotocols #networking #networkengineering #networkingstandards
#networkefficiency #networkreliability #networkperformance #networkresilience #networksecurity
#networkavailability #networkscalability #networkrecovery #networksustainability



Thursday, May 18, 2023

The six types of virtualization in cloud computing

 



Virtualization makes it possible for operating systems, applications and data storage to be abstracted from the underlying hardware or software and represented in virtual form. Given the number of businesses moving their resources to the cloud, it is increasingly expedient for cloud providers to use virtualization to configure their services according to the individual needs of their customers, making those services more scalable and flexible.

1. Storage virtualization

Storage management is one area of cloud computing that has been improved in recent years through virtualization. Storage virtualization involves collecting and merging several physical storage units and rendering them as one storage cluster over a network.

This type often comes in handy for enterprises and individuals aiming to expand and scale their storage without investing in physical storage facilities. In addition, storage virtualization improves effective storage management by ensuring that multiple storage points can easily be accessed from a single repository.

2. Network virtualization

Network virtualization is used to merge several networks into one, duplicate the resources of a network and run an interconnection between virtual machines.

Through network virtualization, virtual networks can be separated and deployed, each with its own configuration and without affecting the others. For example, in creating a virtual network, you can split your bandwidth and assign it separately to the different channels where it is most needed. In addition, network virtualization allows different users to run the same virtual network on a physical network without causing latency issues on the network.

3. Application virtualization

The main goal of application virtualization is to ensure that cloud users have remote access to applications from a server. The server contains all the information and features needed for the application to run and can be accessed over the internet. As a result, you do not need to install the application on your native device to gain access. Application virtualization offers end-users the flexibility to access two different versions of one application through a hosted application or packaged software.

4. Desktop virtualization

Desktop virtualization is typically used to remotely host end users’ operating systems on a server or data center. This type also allows users to access their desktops using different machines.

Virtualizing desktops gives users the flexibility to work on multiple operating systems based on the demands of a project. Besides flexibility, desktop virtualization offers portability, user mobility, and simpler software updates and patch management.

5. Data virtualization

Sometimes, organizations are faced with the challenge of analyzing data pulled from different sources. Data virtualization helps to solve this by ensuring that data culled from multiple sources is analyzed collectively to enhance productivity.

In addition, data virtualization ensures that organizations can centrally manage all of their data stored in multiple sources, such as Google Analytics, Excel files and HubSpot reports, and renders it as a single data set.
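
A minimal sketch of the idea: one query interface over several sources, without copying the data into a single store. The source names and fields below are invented for illustration.

    # Data virtualization sketch: a single logical view over many sources.
    sources = {
        "analytics": [{"page": "/home", "visits": 120}],   # invented rows
        "crm":       [{"page": "/home", "leads": 4}],
    }

    def unified_view():
        """Yield rows from every source as one logical data set."""
        for name, rows in sources.items():
            for row in rows:
                yield {"source": name, **row}

    for row in unified_view():
        print(row)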

6. Server virtualization

Server virtualization helps organizations partition server resources in a way that ensures full utilization of those resources. One of its primary functions is to break up huge physical servers into several virtual server instances, which makes it possible for each instance to be masked and run as a standalone server.

Through server virtualization, organizations can scale their server resources without investing in physical servers and deploy them depending on user requests, needs and computing power.

Benefits of virtualization in a cloud environment

Virtualization in cloud computing has several benefits and has become essential as computing demands surge. One notable benefit is that a crash in one part of a system no longer brings down the entire system. At the same time, virtualization helps protect IT environments from viruses and bugs when testing new software programs.

Furthermore, virtualization makes data transfer easy, as organizations can move data between virtual devices and servers, thereby saving time. In addition, with virtualized desktops and storage, organizations can also move an entire machine without recourse to any physical infrastructure. Doing this improves efficiency, productivity and cost-effectiveness in managing cloud environments.


See more information: network.sciencefather.com






#networkprotocols #networking #networkengineering #networksecurity #networkefficiency
#networkreliability #networkperformance #networkresilience #networkingstandards
#networkingbestpractices #topology #gigabit #bandwidth #firewall #fiberoptics
#topology #ethernet #routing #protocols #scheduling #servers #webs


Monday, May 15, 2023

Introducing Named Data Networking

Named Data Networking (NDN) started in 2010 as an NSF research project to create an architecture for the future Internet. Today, it completely changes the paradigm used by traditional networks.

Named Data Networking is a network service that evolves the Internet’s host-based packet delivery model. NDN retrieves objects directly by name in a secure, reliable and efficient way. The prime objective is to secure information from the user all the way to the data itself, not just the host-to-host or client-server communication that transport layer security (TLS) normally protects.

Unlike TLS, which secures the connection only as far as the host or container, NDN takes us to the next level and secures the path from the user to the actual data. TLS only encrypts the channel; it does not encrypt from the user through the application to the data.

When you are encrypting at the data level, you no longer need middleboxes. Everything is done in a single software stack that can be run everywhere.


Today’s routers are not stateful. This is why there are "middle" boxes in the network, such as wide area network (WAN) optimizers, firewalls and load balancers, all of which have state.

However, NDN puts state back into the routers. You take the metadata, the schema used to describe the data at the application layer, and place it into the network layer. This way, at the networking layer, you are routing based on a hierarchy of names as opposed to IP addresses.

Because the metadata cascades down to the network level, it can now be cached and distributed. When you route a datagram, you use the metadata for routing as opposed to an IP address. This enables the use of the same name at both the application/data layer and the network layer, creating a hierarchical naming schema. Also, because the routers have state, they can cache the data and provide additional features across disparate networks, such as multipath networking.
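
A small Python sketch shows what forwarding on a hierarchy of names can look like: longest-prefix match over '/'-separated names plus an in-router content cache. The forwarding table, faces and names are invented for illustration, not taken from any NDN implementation.

    fib = {                              # forwarding information base (invented)
        "/edu/mit": "face1",
        "/edu/mit/video": "face2",
        "/com/example": "face3",
    }
    content_store = {}                   # in-router cache, i.e., router state

    def forward(name: str) -> str:
        """Pick the outgoing face for the longest matching name prefix."""
        parts = name.strip("/").split("/")
        for i in range(len(parts), 0, -1):
            prefix = "/" + "/".join(parts[:i])
            if prefix in fib:
                return fib[prefix]
        raise LookupError(f"no route for {name}")

    def fetch(name: str) -> str:
        if name in content_store:        # stateful router serves from cache
            return content_store[name]
        face = forward(name)
        data = f"<data for {name} via {face}>"   # stand-in for retrieval
        content_store[name] = data
        return data

    print(fetch("/edu/mit/video/lecture1"))  # matches /edu/mit/video -> face2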


Instead of using IP and the domain name system (DNS), you embed names into the routing. Today, all naming is done through DNS: DNS translates a name into an IP address, and routing is done based on IP addresses.

With NDN, you manage routing and security natively with names while getting rid of IP addresses. NDN uses its own routing protocol, which has properties similar to the OSPF link-state protocol.

One of its routing protocols is a named-data link state routing protocol. It is open source code that you can download and run as an instance on a virtual server or on iOS and Android devices. At the same time, it’s still possible to use IP with NDN: you can have IP in the middle, with NDN running on top of it. So if you have an IP network and run NDN as an overlay, it can run in a Kubernetes container or on an open source Linux stack, but not on proprietary Cisco or Juniper equipment.











See more information: network@sciencefather.com





#NamedDataNetworking #NDN #FutureInternet #ContentCentricNetworking #networkawards
#InternetArchitecture #NetworkedInformation #DataCentricNetworking #networkingevents
#NextGenerationInternet #InternetOfThings #networkdiagram #InformationCentricNetworking
#network #protocols #networkanalysis #networkmarketing #networkswitch #networkingevent
#topology #ethernet #routing #scheduling #servers #networksecurity #networkingtips