Wednesday, April 26, 2023

Wireless Network Security Market Size, Status, Revenue and Business Scenario-Juniper Networks, Inc., Cisco Systems




The global Wireless Network Security Market research report gives a detailed breakdown of the market, including an analytical study, regional analysis, growth factors, and leading companies. It describes the factors driving the industry's expansion. The market comprises large key companies that play a vital role in the production, manufacturing, sales, and distribution of products so that the supply and demand chain is met. The report also examines worldwide market share, past and projected, along with notable trends.

The Wireless Network Security market is expected to register a CAGR of over 12.5% over the forecast period. The increasing consumer propensity toward adopting wireless devices in residential and commercial spaces is augmenting the wireless network security demand.


Top Key Players are covered in this report:

Juniper Networks, Inc., Cisco Systems, Inc., CommScope, Honeywell International Inc., Symantec Corporation (Broadcom), Fortinet, Inc., Sophos Ltd., Bosch Sicherheitssysteme GmbH (Robert Bosch GmbH), Aruba Networks (Hewlett Packard Enterprise Development LP), Extreme Networks, ADT, Motorola Solutions, Inc.

Market Trends:

The retail industry has witnessed growth over the last two years, especially with the massive expansion of e-commerce across the globe. Retailers are therefore utilizing IoT solutions to improve operational efficiency and enhance the customer experience for competitive advantage. With increasing IoT use in the retail space, wireless security demand is expected to grow over the forecast period. Retail companies face a landscape of increasingly sophisticated threats, and the financial impact of breaches is soaring.

Industry Overview:

The wireless network security market is fragmented; many players, such as Cisco Systems, Inc., Juniper Networks, Symantec Corporation, Fortinet, Inc., and Aruba Networks, occupy a smaller market share. The players are pursuing a mergers and acquisitions strategy to strengthen their product portfolio. Also, companies in the market are continually updating their existing product portfolio with the latest technologies.

Global Wireless Network Security Market: By Types

Consulting Operations
Managed Security Services
Security Operations
Others

Global Wireless Network Security Market: By Applications

Government & Utility
BFSI
Manufacturing
Telecom & IT
Retail
Aerospace & Defense
Healthcare
Others






#protocols #routing #scheduling #servers #networkmarketing

#networkswitch #topology #ethernet #firewall #fiberoptics

#networkdiagram #network #gigabit #bandwidth #networkanalysis


Tuesday, April 18, 2023

How Does DHCP Work?

 


To fully understand the working of DHCP, we must look at the components of the DHCP Network:

DHCP server: This is the central device that holds, assigns, and manages IP addresses. It can be a server, router, or SD-WAN appliance.

DHCP client: This is the endpoint that requests IP addresses. A client can be installed on any type of peripheral device, and most include one in their default settings.

Subnets: These are parts of a more extensive network.

DHCP relay: This refers to devices, such as routers, that act as intermediaries between clients and servers, forwarding messages so they reach their destination.

The overall process and detailed mechanisms explain the working principle of Dynamic Host Configuration Protocol (DHCP). A DHCP system consists of two essential elements: the client and the server.

The clients are peripheral devices, while the DHCP server allocates IP addresses. The physical server often comes with a backup. Other devices function similarly to servers, such as SD-WAN appliances or the more common wireless access points.

It is natural to wonder how the end device initially connects to the server without an IP address, which is explained by an intricate system of exchanging messages and acknowledgments. To start, all modern devices have a DHCP client system installed during manufacturing, which is enabled by default.

The DHCP client is present in peripheral devices and computers and starts functioning as soon as the computer is turned on, and the operating system is running. Therefore, most devices can already find and connect to a DHCP network.

The entire process, although a bit complex, occurs automatically within seconds. The initialization process involves four message types which are:

1. DHCP discovery

The discovery message is the first message transmitted across the network to which clients are linked. The message type DHCPDISCOVER is sent widely across the network and not to a specific address, as the client is unaware of the server’s address.

The discovery message is a packet sent to the broadcast destination address (usually 255.255.255.255), reaching every device on that network. The packet may instead carry a specific subnet broadcast address if one is configured. The discovery procedure is universal and works with any DHCP server, provided the client is on that network.

Although there are no fixed destination addresses for individual servers and clients, the port numbers are fixed parameters used in all DHCP communication. DHCP servers use User Datagram Protocol (UDP) port 67 and listen for messages addressed to that port. DHCP clients use UDP port 68 and only respond to messages sent to port 68.
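As an illustration of the packet described above, here is a minimal Python sketch that assembles a DHCPDISCOVER payload following RFC 2131's BOOTP layout. The MAC address and transaction ID are placeholder values, and actually broadcasting the packet (shown in comments) would typically require elevated privileges:

```python
import struct

def build_dhcpdiscover(mac: bytes, xid: int) -> bytes:
    """Build a minimal DHCPDISCOVER payload (BOOTP format per RFC 2131).

    A client broadcasts this to 255.255.255.255 on UDP port 67 and
    listens for the server's reply on UDP port 68.
    """
    packet = struct.pack(
        "!BBBBIHH4s4s4s4s16s64s128s",
        1,            # op: BOOTREQUEST
        1,            # htype: Ethernet
        6,            # hlen: MAC address length
        0,            # hops
        xid,          # transaction ID, echoed by the server
        0,            # secs
        0x8000,       # flags: broadcast bit set
        b"\x00" * 4,  # ciaddr: client has no IP address yet
        b"\x00" * 4,  # yiaddr
        b"\x00" * 4,  # siaddr
        b"\x00" * 4,  # giaddr
        mac.ljust(16, b"\x00"),  # chaddr: client hardware address
        b"\x00" * 64,            # sname
        b"\x00" * 128,           # file
    )
    packet += b"\x63\x82\x53\x63"  # DHCP magic cookie
    packet += b"\x35\x01\x01"      # option 53: message type = DHCPDISCOVER
    packet += b"\xff"              # end option
    return packet

# To actually send it (usually needs root/administrator rights):
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
# sock.bind(("", 68))
# sock.sendto(build_dhcpdiscover(b"\xaa\xbb\xcc\xdd\xee\xff", 0x1234),
#             ("255.255.255.255", 67))
```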

2. DHCP offer

The DHCP offer is the reply sent by the server after receiving the discovery message. The message type is DHCPOFFER, broadcast across the network to UDP port 68 so that any DHCP client connected to that network can pick it up. However, the message is targeted at just one client: the server attaches the MAC address of that specific client, and other clients ignore the message when they see a MAC address that is not their own.

Included in the DHCP offer is an IP address the client may accept. The message also tells the client the lease period, the DNS server addresses, the IP address of the DHCP server, the default gateway, and the subnet mask. Together, this information fully integrates the device into the network.

3. DHCP request

The DHCPREQUEST message safeguards and guides the client in a network with multiple servers. Some networks, typically large ones, have several servers, all capable of receiving the discovery message and sending the client an offer with an IP address. Because of this, the DHCP client sends a request message after receiving an offer, which may be the first of many.

The DHCP request message confirms the client's choice: it accepts the IP address in the offer it received. The request is transmitted with the IP address of the chosen server embedded in it. That server then receives the request and marks the client's IP address as unavailable to other devices. Any other servers that sent offers return their offered IP addresses to their pools for other devices that may need them.

4. DHCP acknowledgment

DHCP acknowledgment is the final step in the initialization process. It is a message sent by the server that supplied the IP address. The message is defined as “DHCPACK”, acknowledging that the IP address in question has been successfully leased to the client. The configuration is complete at this stage, and the client has a new, functional IP setting.
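The four-message handshake described above can be sketched as a toy, in-memory simulation. The class and method names here are illustrative, not part of any real DHCP implementation:

```python
class DhcpServerSim:
    """Toy in-memory DHCP server illustrating the discover/offer/
    request/acknowledge handshake."""

    def __init__(self, pool):
        self.pool = list(pool)  # available addresses
        self.offers = {}        # mac -> offered IP
        self.leases = {}        # mac -> leased IP

    def handle_discover(self, mac):
        ip = self.pool.pop(0)   # reserve an address to offer
        self.offers[mac] = ip
        return ("DHCPOFFER", ip)

    def handle_request(self, mac, ip):
        if self.offers.get(mac) == ip:
            self.leases[mac] = self.offers.pop(mac)  # commit the lease
            return ("DHCPACK", ip)
        return ("DHCPNAK", None)

server = DhcpServerSim(["192.168.1.10", "192.168.1.11"])
msg1, offered = server.handle_discover("aa:bb:cc:dd:ee:ff")        # D -> O
msg2, leased = server.handle_request("aa:bb:cc:dd:ee:ff", offered)  # R -> A
```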

5. Control of lease time

DHCP is a dynamic protocol: it does not assign permanent IP addresses to clients. While a permanent address might suit some devices, DHCP instead attaches a specific lease time to each IP address. Once this period is up, the client can no longer use the address and is removed from the network. The concept of lease time serves to weed out inactive clients.

For active clients, the lease is renewed halfway through the lease time, so the user experiences no downtime. An inactive client, by contrast, cannot renew the lease and is removed from the network. Devices that shut down gracefully release their lease, returning the address to the pool of available addresses.
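The renewal-at-halfway behaviour follows RFC 2131's default timers: a sketch of computing them, assuming renewal (T1) at 50% of the lease and rebinding (T2, when renewal with the original server fails) at 87.5%:

```python
def lease_timers(lease_seconds: int) -> dict:
    """Lease timers using RFC 2131 default fractions: the client tries
    to renew with its server at T1 (50% of the lease) and rebinds with
    any server at T2 (87.5%) if renewal fails."""
    return {
        "T1_renew": lease_seconds * 0.5,
        "T2_rebind": lease_seconds * 0.875,
        "expiry": float(lease_seconds),
    }

# A one-day lease renews after 12 hours and rebinds after 21 hours.
timers = lease_timers(86_400)
```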






#networkswitch #topology #ethernet #firewall #fiberoptics

#networkdiagram #network #gigabit #bandwidth #networkanalysis

#protocols #routing #scheduling #servers #networkmarketing


What Is DHCP (Dynamic Host Configuration Protocol)?

 



Dynamic Host Configuration Protocol (DHCP) is a protocol used by devices connected to the internet to guide the distribution and use of IP addresses. The internet is heavily governed by a series of guidelines, principles, and standards generally called protocols, standardized by the IETF (Internet Engineering Task Force). These public standards are critical because they ensure that devices and programs, irrespective of who created them, are compatible with others worldwide.

Like other protocols, the Dynamic Host Configuration Protocol is not a program. It exists as a set of standards that lays out the procedures for requesting and sharing IP addresses over a computer network, and it is followed when building address distribution functions.

Understanding what DHCP is

Through Dynamic Host Configuration Protocol (DHCP) exchanges between a server and a client, an Internet Protocol host is automatically assigned an IP address and other configuration data. DHCP makes it possible for a host to get the necessary Transmission Control Protocol/Internet Protocol (TCP/IP) configuration information from the server.

The DHCP server serves the given network and can automatically assign IP addresses to the computers on it. This automatic assignment of IP addresses is a function of the DHCP standard, performed in response to queries broadcast by client computers.

DHCP is a network management standard employed in giving out IP addresses to computers and any other device or nodes on a network to foster better communication. Before using DHCP, network administrators were tasked with manually assigning Internet Protocol addresses to all the peripheral devices on a network, exposing the system to errors and placing a tremendous burden on the admins, especially in large networks.

Presently, admins can use DHCP in large networks, such as campus, enterprise, and wide area networks (WANs), as well as in smaller networks such as residential networks.

A DHCP network comprises two components: the central server with the DHCP component installed on it, and client instances represented by computers or any internet-enabled device connected to the network. Even when devices change location, DHCP assigns them new IP addresses. The DHCP standard has been integrated into, and can be used with, versions 4 and 6 of the Internet Protocol.

Importance of DHCP

An Internet Protocol (IP) address is a vital component that must be assigned to every device operating in a Transmission Control Protocol/Internet Protocol network. DHCP is a set of rules that makes this process easier and less cumbersome.

With or without DHCP, one must assign IP addresses to devices because they contain the information used to accurately direct the transmission of data packets passed across any network. Without DHCP, computers moved to another network will have to undergo manual configuration to assign them new IP addresses. Similarly, the IP addresses assigned to computers that have left the network must be manually retrieved.

DHCP, however, performs this entire process automatically. The DHCP server keeps a pool of available IP addresses, from which it leases an address to any client on its network. When using DHCP, addresses are leased: they are temporarily assigned to a client rather than permanently associated with a device. Because leases are dynamic, any address no longer in use is returned to the pool and can be given to another device.
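The pool-and-lease behaviour described here can be sketched as a simple allocator. The addresses, lease length, and timestamps below are illustrative:

```python
import time

class LeasePool:
    """Minimal sketch of a DHCP-style address pool: addresses are
    leased temporarily and returned once the lease expires."""

    def __init__(self, addresses, lease_seconds):
        self.free = list(addresses)
        self.lease_seconds = lease_seconds
        self.leases = {}  # ip -> (client_id, expiry timestamp)

    def lease(self, client_id, now=None):
        now = time.time() if now is None else now
        self._reclaim(now)  # expired leases go back to the pool first
        ip = self.free.pop(0)
        self.leases[ip] = (client_id, now + self.lease_seconds)
        return ip

    def _reclaim(self, now):
        for ip, (_, expiry) in list(self.leases.items()):
            if expiry <= now:
                del self.leases[ip]
                self.free.append(ip)

pool = LeasePool(["10.0.0.5"], lease_seconds=60)
ip1 = pool.lease("client-a", now=0)   # leased until t=60
ip2 = pool.lease("client-b", now=61)  # same address, reclaimed after expiry
```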

Benefits of the DHCP Network

Using DHCP, organizations can expect the following benefits:

Internet Service Providers (ISPs) use DHCP to assign IP addresses to their users. This method of IP address assignment is suitable because not all users are connected to the internet throughout the day. Consequently, addresses can be assigned for the duration of a connection and returned to the pool once the connection is severed.

DHCP reduces IP conflicts to a bare minimum. Because IP addresses are centrally assigned, there is far less chance of two devices receiving the same address or of errors in an address.

DHCP also provides support and an easy transition for devices still using the BOOTP standard, which preceded the Dynamic Host Configuration Protocol.




#protocols #routing #scheduling #servers #gigabit #bandwidth

#topology #ethernet #firewall #fiberoptics #networkdiagram

#bluetooth #networkanalysis #networkmarketing #networkswitch


Monday, April 17, 2023

Gigabit Ethernet is key when there is a mine of information




Continuous, reliable communications are essential for the success of mining operations. They enable the transfer of crucial information across facilities, ensuring that fans, pumps, conveyors, and other key pieces of equipment operate correctly. When ineffective communications led to an increase in downtime at a mining complex in Mexico, CC-Link IE network technology offered a solid solution.

When the (broadcast) storm is coming

The facility utilises a Mitsubishi Electric MELSEC iQ-R PLC platform to control 35 variable frequency drives (VFDs), which in turn modulate the speed of fans, pumps and conveyors. While the automation components had been operating successfully for years, the mining complex was experiencing prolonged downtime associated with network failure. More precisely, approximately 20 hours were lost every month because of broadcast storms, data packet collisions, and intermittent or even lost communications between enterprise-level software and field devices.

To address these challenges, the mining company decided to replace its existing network technology with a more effective one. After evaluating and testing CC-Link IE open industrial Ethernet, the company was convinced this was the best solution to address their need for reliability and continuity. In particular, the mining specialist was impressed with how CC-Link IE’s unrivalled gigabit bandwidth could prevent congestions and ultimately downtime. In addition, the company found the diagnostic tools provided extensive and easy to use.

Gigabit Ethernet to ensure reliability

When Mitsubishi Electric started to support the mining company in the configuration of CC-Link IE, further benefits became apparent. Carlos SepĂșlveda, Sales Engineer at Mitsubishi Electric Mexico, explains: “It is possible to conduct network configuration and diagnostics from the same software used to program the iQ-R PLC, GX Works, which offers a single point of contact. This also streamlines any work on the infrastructure and architecture, as, if the topology is altered, e.g., by adding components, the platform automatically incorporates and reflects these changes.”

In addition, the installation of CC-Link IE helped the company reduce infrastructure costs. While the existing network technology required managed switches to ensure correct operations, these devices are optional with CC-Link IE, minimising capital expenditure (CAPEX) as well as expenses associated with their maintenance.

Since the new network has been put in place, no downtime associated with network failure has been experienced, maximising productivity. The gigabit bandwidth has also supported the mining complex to enhance responsiveness. Furthermore, it is playing a key role in getting the information technology (IT) and operational technology (OT) domains closer, hence opening a gateway to the Industrial Internet of Things (IIoT).

Sitting on a goldmine of data

Carlos SepĂșlveda comments: “The customer is extremely happy with CC-Link IE open industrial Ethernet. This technology is helping the mining company reduce the gap between IT and OT as well as make its operations ‘smart’, as it can now rely on a robust network that can manage a lot of data packages while offering high performance. These successful results are boosting the customer’s confidence in CC-Link IE – this is why they are already planning to use it in a new project.”

The mining specialist is also looking at futureproofing its facilities, by leveraging CC-Link IE TSN, the first open industrial Ethernet to combine gigabit bandwidth and Time-Sensitive Networking (TSN) to enhance determinism and convergence. Carlos SepĂșlveda adds: “The customer has been showing considerable interest in learning more about CC-Link IE TSN and what benefits it offers.”

John Browett, General Manager at the CLPA Europe, concludes with: “We are very pleased with the positive feedback we received from the mining company. It is a great example of the many benefits CC-Link technologies can offer to companies in different industries and how it can help them be a part of the digital transformation.”









#topology #ethernet #firewall #fiberoptics  #gigabit #bandwidth

#networkdiagram #network  #routing #scheduling #servers #protocols

#bluetooth #networkanalysis #networkmarketing #networkswitch


Tuesday, April 11, 2023

6 types of Network Topologies




Network Topology is essential to network configuration, as it determines the arrangement of a network and defines how nodes connect. Here are six common types of Network Topologies.


1. Bus Network Topology


A bus network topology consists of one flat network where all devices, known as stations, directly connect and transmit data between one another. From an intelligence perspective, bus networks are simple when it comes to transmitting and retransmitting data.

When one station transmits data, the bus automatically broadcasts it to all other stations. Only the destination station accepts the transmission; all the other devices can recognize that the traffic isn't meant for them and ignore the communication.

Despite its simplicity, however, a bus topology is sometimes inefficient because it broadcasts data to all devices on a network. This can cause network congestion and reduce performance. As a result, bus networks are rarely used in modern enterprise environments.
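The broadcast-and-filter behaviour described above can be shown with a toy simulation; the station addresses are hypothetical:

```python
class BusStation:
    """A station on a shared bus: it sees every frame but keeps
    only traffic addressed to itself."""

    def __init__(self, address):
        self.address = address
        self.received = []

    def on_frame(self, dst, payload):
        if dst == self.address:  # everyone else ignores the frame
            self.received.append(payload)

def bus_broadcast(stations, dst, payload):
    """The bus delivers every transmission to every station."""
    for station in stations:
        station.on_frame(dst, payload)

stations = [BusStation(a) for a in ("A", "B", "C")]
bus_broadcast(stations, "B", "hello")  # all three see it; only B keeps it
```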


2. Ring Network Topology


A ring topology is a configuration where every device directly connects to two other devices on a network, forming a continuous circle in a nonhierarchical structure. Data sent to a specific device transmits from device to device around the ring until it reaches its intended destination. In some cases, the data transmits in a single direction around the ring. In others, transport occurs bidirectionally.

In the early days of token ring networking, data would transmit around the ring, touching each endpoint network interface card until the data reached its destination. Nowadays, ring networks, such as Synchronous Optical Network, consist of network switches that form a ring.


3. Mesh Network Topology


A mesh topology is another nonhierarchical structure where each network node directly connects to all others. Mesh topologies ensure tremendous network resiliency because neither an outage nor loss of connectivity occurs if a connection goes down. Instead, traffic simply reroutes down a different path.

The caveat of using a mesh topology, however, is it adds to the complexity of the architecture. This also significantly increases the number of required network cables if the mesh uses wired links. To avoid cabling issues, enterprises typically relegate mesh networks to wireless systems, like Wi-Fi-based mesh deployments.


4. Star Network Topology


A star topology, also known as a hub-and-spoke topology, uses a central node -- typically, a router or a Layer 2 or Layer 3 switch. Unlike a bus topology, which simply broadcasts transmitted frames to all connected endpoints, a star topology uses components that have an extra level of built-in intelligence.

Layer 2 switches maintain a dynamic media access control (MAC) address table in star topology deployments. The table maps the MAC address of a device to its connected physical switchport. When a frame travels to a specific MAC address on a LAN, the switch performs a MAC address table lookup to determine the frame's destination port. This significantly reduces the amount of unnecessary broadcast traffic that can create a bottleneck.
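A minimal sketch of the MAC learning and lookup behaviour just described (an illustration, not a real switch implementation): the switch learns source MACs per port and forwards known destinations to a single port instead of flooding.

```python
class LearningSwitch:
    """Toy Layer 2 switch with a dynamic MAC address table."""

    def __init__(self):
        self.mac_table = {}  # MAC -> switchport

    def receive(self, src_mac, dst_mac, in_port, all_ports):
        self.mac_table[src_mac] = in_port  # learn the sender's port
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]  # unicast to the known port
        # Unknown destination: flood out every port except the ingress one.
        return [p for p in all_ports if p != in_port]

sw = LearningSwitch()
ports = [1, 2, 3]
flood = sw.receive("aa:aa", "bb:bb", in_port=1, all_ports=ports)  # bb unknown
sw.receive("bb:bb", "aa:aa", in_port=2, all_ports=ports)          # learn bb on 2
targeted = sw.receive("aa:aa", "bb:bb", in_port=1, all_ports=ports)
```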

Using a Layer 3 device as the star topology central node enables IP addressing and routing tables to target traffic forwarding and send it toward a single destination.


5. Tree Network Topology


A tree topology is a hierarchical structure where nodes link and arrange like a tree when drawn out in network diagram form. Network professionals typically deploy tree topologies with core, distribution and access layers.

At the top of the tree is the core layer, which is responsible for high-speed transport from one part of a network to another. The distribution layer in the middle of the tree performs similar transport duties as the core but on a more localized level. The distribution layer is also where network administrators apply access control lists and quality of service policies. At the bottom of the tree is the access layer, which is where endpoint devices connect into the network.

Leaf-spine network topology is a form of tree topology that has become increasingly popular in the data center. A leaf-spine topology sticks to the hierarchical structure of a tree model but has only two layers, as opposed to the traditional three. Spine switches are responsible for high-speed transport across the entire data center; leaf switches fully mesh to the spine nodes and are responsible for connecting application, database and storage servers to the data center.


6. Hybrid Network Topology


Corporate networks often use more than one type of network topology. One topology may be preferable to another, depending on factors related to performance, reliability and cost. For example, a network professional may configure a wireless LAN that uses a star-based topology for most network connections but also use a wireless mesh network in certain situations, such as when a network cable can't connect to an access point.








#ethernet #firewall #fiberoptics #bandwidth  #topology #servers

#networkanalysis #scheduling #networkmarketing  #Protocols
 
#networkswitch #networkdiagram #network #gigabit #routing
 

Saturday, April 8, 2023

Network bandwidth vs. throughput: What's the difference?

 
Bandwidth and throughput both indicate network performance. The terms are often used together, but bandwidth refers to capacity, while throughput details how much data actually transmits.




Bandwidth and throughput both concern network data. Network bandwidth defines how much data can possibly travel in a network in a period of time. Network throughput refers to how much data actually transfers during a period of time. Bandwidth and throughput are also sometimes conflated with latency, which refers to the speed at which data travels across the network to its destination.


What is network bandwidth?


When thinking about bandwidth, the key word is capacity. Bandwidth refers to the maximum amount of data that could, theoretically, travel from one point in the network to another in a given time.

Bandwidth is a limited resource. Depending on their capacity, networks can handle only a certain amount of bandwidth, and some devices consume more bandwidth than others. Insufficient bandwidth can lead to network congestion, which slows connectivity. Network professionals can compensate for these factors by calculating the bandwidth requirements for devices and adjusting bandwidth allocation as needed.

Bandwidth measurement units include bit, kilobit, megabit (Mb) and gigabit (Gb). Say, for example, a network has a bandwidth of 1 Gb per second (Gbps). This means 1 Gb is the maximum amount of data that could travel between links in one second, in an ideal situation. Yet, most networks typically don't operate in ideal situations.
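The arithmetic behind such an ideal-case figure is straightforward. As a sketch, assuming no protocol overhead or congestion:

```python
def ideal_transfer_seconds(size_bytes: float, bandwidth_bps: float) -> float:
    """Best-case transfer time at the stated link capacity,
    ignoring protocol overhead, congestion and latency."""
    return (size_bytes * 8) / bandwidth_bps  # bytes -> bits, then divide

# A 1 GB file over a 1 Gbps link needs at least 8 seconds.
t = ideal_transfer_seconds(1_000_000_000, 1_000_000_000)
```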

Networks sometimes experience slow connectivity, limited range, outages and other issues that diminish performance. In these situations, it takes longer for a data packet to travel across the network.


What is network throughput?


When thinking about throughput, the key word is amount. Throughput refers to the actual amount of data transmitted and processed throughout the network. If bandwidth describes the theoretical, throughput describes the empirical, and the numbers for each metric usually differ.

Because networks often experience issues that hinder performance, throughput often differs from the maximum network bandwidth. Throughput shows the data transfer rate and reflects how the network is actually performing. Unless the network operates at max performance, the throughput is lower than the bandwidth.

Throughput is measured with the same bitrate units as bandwidth. A network could have a bandwidth of 1 Gbps, which means it's capable of handling 1 Gbps. But, depending on the circumstances, its throughput could be only 500 Mbps, with the network processing half its capacity.
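The relationship between measured throughput and stated bandwidth can be expressed directly. The figures below mirror the 1 Gbps / 500 Mbps example:

```python
def throughput_bps(bytes_transferred: float, elapsed_seconds: float) -> float:
    """Observed throughput: data actually moved per unit time."""
    return bytes_transferred * 8 / elapsed_seconds

def utilisation(throughput: float, bandwidth: float) -> float:
    """Fraction of the link's stated capacity actually used."""
    return throughput / bandwidth

# 62.5 MB (i.e. 500 Mb) moved in one second over a 1 Gbps link
tp = throughput_bps(62_500_000, 1.0)
u = utilisation(tp, 1_000_000_000)  # the network processes half its capacity
```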




Where does latency fit with bandwidth and throughput?


Bandwidth and throughput are often used to describe network speed, but speed mostly depends on the latency of the network. Latency is a measurement of the amount of time it takes a data packet to travel from one point in the network to another, from sender to receiver.

Sometimes, latency is measured as round-trip time, which includes the time it takes for a packet to travel from its destination point back to its origin point. If the latency is high, this indicates a delay, which is sometimes called lag.

Bandwidth and latency don't necessarily affect one another, but network professionals use the two metrics together to analyze network performance. Latency issues are often more apparent in high-bandwidth networks. For example, if a network has 1 Gbps of bandwidth yet it takes two seconds for 1 Gb to travel between links, a network professional can conclude there is a high-latency issue because the network performance is inadequate.

Throughput and latency, on the other hand, have an inversely proportional relationship. High-throughput networks move large amounts of data between links, meaning they have low latency and little lag affecting speed. Low-throughput networks move and process less data between links, which may reflect high latency delaying how long data takes to reach its destination.
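Round-trip time can be measured at the application level by timestamping a request/response exchange. In the sketch below, a simulated 10 ms exchange stands in for a real network call (a ping, an HTTP request, or an echo over TCP):

```python
import time

def round_trip_time(exchange) -> float:
    """Time one request/response exchange: stamp the clock, perform
    the exchange, and stamp it again. `exchange` is any callable that
    performs the round trip."""
    start = time.perf_counter()
    exchange()
    return time.perf_counter() - start

# Illustrative stand-in for a network exchange: a 10 ms "link".
rtt = round_trip_time(lambda: time.sleep(0.01))
```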


Why are network bandwidth and throughput important?


Bandwidth and throughput, as well as latency, are important metrics that network professionals use to monitor network performance. When network professionals measure bandwidth, they gain an understanding of the capabilities of their networks. Throughput serves as an indication of how well the network performs to that standard. Other metrics, such as latency, also indicate the network performance and can influence the network bandwidth and throughput numbers.







#networkanalysis #scheduling #networkmarketing  #Protocols
 
#networkswitch #networkdiagram #network #gigabit #routing
 
#ethernet #firewall #fiberoptics #bandwidth  #topology #servers

Thursday, April 6, 2023

Firewall And Key Uses of Firewall





Firewalls can be thought of as gated borders or gateways that regulate the movement of web traffic into and out of a private network. The term refers to the idea that a physical firewall can contain a fire until emergency services put it out. By analogy, network security firewalls manage traffic and are typically designed to slow the spread of web threats.

Firewalls create "choke points" that funnel traffic to a single point where it is evaluated against a strict set of programmed parameters and acted upon accordingly. Some firewalls also track traffic and connections in audit logs to record what has been permitted or blocked.
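The choke-point evaluation and audit logging described above can be sketched as a first-match rule filter. The rules and addresses below are illustrative only:

```python
from ipaddress import ip_address, ip_network

class Firewall:
    """First-match packet filter sketch: rules are checked in order at
    the choke point, and every decision is recorded in an audit log."""

    def __init__(self, rules, default="deny"):
        self.rules = rules      # list of (source network, port, action)
        self.default = default  # applied when no rule matches
        self.audit_log = []

    def evaluate(self, src_ip, dst_port):
        action = self.default
        for network, port, rule_action in self.rules:
            if ip_address(src_ip) in ip_network(network) and dst_port == port:
                action = rule_action
                break  # first matching rule wins
        self.audit_log.append((src_ip, dst_port, action))
        return action

fw = Firewall([
    ("10.0.0.0/8", 443, "allow"),  # internal clients may use HTTPS
    ("0.0.0.0/0", 23, "deny"),     # telnet blocked from anywhere
])
a1 = fw.evaluate("10.1.2.3", 443)      # matches the first rule
a2 = fw.evaluate("203.0.113.9", 443)   # no match: default deny
```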

Firewalls are frequently used to secure a private network's perimeter. As such, they are one security tool in the larger category of user access control. These barriers are typically set up either on dedicated network computers or on user computers and other endpoints themselves.

People and organizations need to secure their data against the wide variety of cybercrimes growing every day, but enforcing that security is difficult in many situations. A firewall is one such security measure, allowing you to protect your network and devices from outsiders.




The main applications for firewalls include:

• Firewalls can be used in both corporate and consumer environments. Businesses install firewalls on the network perimeter to protect against both external and insider threats. They can also be incorporated into a modern organization's cybersecurity tooling as part of a Security Information and Event Management (SIEM) strategy.

• Firewalls can perform logging and audit functions, identifying patterns and improving policies by updating rules to counteract immediate threats.

• Firewalls can also be used on home networks, Digital Subscriber Lines (DSL), and cable modems with static IP addresses. They can easily filter out unwanted traffic and alert users to intrusions.






#networkanalysis  #networkmarketing  #networkswitch  #gigabit  #bandwidth

#topology  #ethernet  #firewall  #fiberoptics #networkdiagram  #network

#protocols  #routing  #scheduling  #servers

Monday, April 3, 2023

AWS launches new chips, replacement for TCP

 



Amazon Web Services has introduced a new CPU customized for high-performance computing (HPC) and the next generation of its Nitro smart networking chip, plus instances that take full advantage of the hardware.

The Arm-based CPU is called the Graviton3E and has been optimized for the floating-point math that is key in HPC, the company announced at its AWS re:Invent conference. Amazon said Hpc7g instances powered by the new Graviton3E chips offer up to double the floating-point and vector performance of the current generation of instances.

The vast datasets that accompany HPC need to be moved around, so Amazon also introduced the fifth generation of its Nitro smartNICs, offering up to twice the network bandwidth and up to 50% higher packets-per-second processing performance compared to the current generation of networking-optimized instances.

Accompanying the new chip is a new Elastic Compute Cloud (EC2) instance, the networking-optimized C7gn. The C7gn uses the Graviton3 processor and is designed to deliver optimized networking performance for the most network-intensive workloads. It offers 200 Gbps of throughput and up to 50% higher packet-processing performance than the previous network-optimized instance.


Replacing TCP

To accelerate data movement for HPC workloads, AWS created the Elastic Fabric Adapter (EFA) for scalable, high-speed inter-node communication. Part of EFA is another AWS creation, Scalable Reliable Datagram (SRD), an alternative to the widely used TCP protocol.

AWS's Peter DeSantis said AWS internal networking relies on multipath routing, but creaky old TCP uses a single path and isn't good at noticing when performance is compromised. TCP also transmits packets in order, which can cause latency.

SRD makes use of multipath routing, so it does not transmit packets in order; instead, the receiving end reorders them. DeSantis said it will retransmit dropped packets "in microseconds, not milliseconds" and speed up networks hosted on the AWS cloud.
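The receive-side reordering idea can be sketched as a sequence-number buffer. This illustrates the general technique, not AWS's SRD implementation:

```python
class ReorderBuffer:
    """Receive-side reassembly sketch: packets may arrive out of order
    over multiple paths; the receiver releases them in sequence."""

    def __init__(self):
        self.next_seq = 0
        self.pending = {}  # out-of-order packets held back

    def receive(self, seq, payload):
        self.pending[seq] = payload
        released = []
        while self.next_seq in self.pending:  # drain any in-order run
            released.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return released

buf = ReorderBuffer()
r1 = buf.receive(1, "b")  # arrives early over a faster path, held back
r2 = buf.receive(0, "a")  # fills the gap, both released in order
```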

And now all of AWS will gain the use of SRD, because AWS has built ENA Express, a new version of the network driver offered with EC2 instances, to provide native SRD support. SRD is being rolled out site-wide.


#protocols #bandwidth #topology #bluetooth


Billions of Messages Per Minute Over TCP/IP


One of the most important issues when building distributed applications is that of data representation. We must ensure that data sent by a component to a “remote” component (i.e. one that is part of a different process) is received correctly, with the same values. This may seem straightforward but remember that the communicating components may have been written in completely different languages.

Things are complicated further when we consider that different hardware/system architectures are likely to have different ways of representing the “same” values. Simply copying bytes from one component to another is not enough. Even in Java, where we may consider ourselves “protected” from this kind of situation, there is no requirement that two different JVM implementations or different versions from the same vendor use the same internal representation for objects.




The most common solution to this problem is to define a “canonical” representation of data that is understood between processes - even between programming languages - and have data translated into this format before sending and then back into the receiver’s own format once received. Several such “wire formats” exist, ranging from text-based standards such as YAML, JSON or XML, to binary options such as Protobuf that incorporate metadata or are completely raw.
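As an example of the canonical-format round trip described above, using JSON as the wire format (the field names here are illustrative):

```python
import json

# Sender: translate the in-memory object into the canonical wire format.
order = {"symbol": "XYZ", "qty": 100, "price": 9.75}
wire_bytes = json.dumps(order).encode("utf-8")

# Receiver (possibly a different language or runtime): translate the
# canonical bytes back into its own native representation.
decoded = json.loads(wire_bytes.decode("utf-8"))
```

Because both ends agree only on the canonical byte format, neither needs to know how the other lays out objects in memory.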

At Chronicle Software we have developed a number of libraries to support the building of applications that are optimised for low latency messaging, primarily in the financial services industry. We provide bespoke solution development and consultancy to clients, the majority of whom are in the financial area, from all over the world.

As software architectures increasingly follow a distributed, event-based approach, we are looking to expand the space in which Chronicle Wire can be used, to support TCP/IP interconnections between components. This article provides a basic overview of the features that will be available and some simple examples of how they can be used.

We are already seeing some startling performance figures for this basic approach - in a benchmark described in Peter Lawrey’s article Java is Very Fast, If You Don't Create Many Objects, for example, which is built upon loopback TCP/IP networking on a single machine, we were able to pass over 4 billion events per minute.

We benchmarked this against similar technologies used for data exchange, specifically Jackson and BSON. In a test processing 100-byte messages, the 99.99th percentile per-message processing time was about 10.5 microseconds with Chronicle Wire, compared to 1,400 microseconds with Jackson/BSON, a significant difference.

Here we present an introduction to the key concepts used to realise this. We are, however, designing these features to be flexible as well as performant, and future articles will show some more advanced use cases.


#protocols #bandwidth #topology #bluetooth