An Evaluation of Text Mining Techniques in Sampling of Network Ports from IBR Traffic
- Chindipha, Stones D, Irwin, Barry V W, Herbert, Alan
- Authors: Chindipha, Stones D , Irwin, Barry V W , Herbert, Alan
- Date: 2019
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/427630 , vital:72452 , https://www.researchgate.net/profile/Stones-Chindipha/publication/335910179_An_Evaluation_of_Text_Mining_Techniques_in_Sampling_of_Network_Ports_from_IBR_Traffic/links/5d833084458515cbd1985a38/An-Evaluation-of-Text-Mining-Techniques-in-Sampling-of-Network-Ports-from-IBR-Traffic.pdf
- Description: Information retrieval (IR) provides techniques for gauging the extent to which certain keywords can be retrieved from a document. These techniques have been used to measure similarity in duplicated images, to identify native languages, and to optimize algorithms, among other applications. With this notion, this study proposes the use of four Information Retrieval Techniques (IRT/IR) to gauge the implications of sampling the ports of a /24 IPv4 net-block into smaller subnet equivalents. Using IR, this paper shows how the ports found in a /24 IPv4 net-block relate to those found in the smaller subnet equivalents. Using Internet Background Radiation (IBR) data collected at Rhodes University, the study found compelling evidence of the viability of using such techniques in sampling datasets: they identify the variation that comes with sampling the baseline dataset and show how similar the various samples are to it. The correlation observed in the scores demonstrates how viable these techniques are for quantifying variation in the sampling of IBR data. In this way, one can identify which subnet equivalent best represents the unique ports found in the baseline dataset (the /24 IPv4 net-block dataset).
- Full Text:
- Date Issued: 2019
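As a concrete illustration of the kind of comparison the abstract above describes, the sketch below scores the overlap between the unique ports seen on a /24 baseline and those seen on a sampled subnet equivalent. Jaccard similarity is assumed here purely for illustration; the abstract does not name the four IR measures actually used, and the port sets are invented.

```python
# Illustrative sketch (not the paper's code): compare the unique destination
# ports seen by a /24 baseline sensor with those seen by a sampled subnet
# equivalent, using Jaccard similarity as an assumed stand-in for one of the
# four (unnamed) IR measures.

def jaccard(baseline_ports: set[int], sample_ports: set[int]) -> float:
    """Return |A ∩ B| / |A ∪ B| for two sets of observed ports."""
    if not baseline_ports and not sample_ports:
        return 1.0
    return len(baseline_ports & sample_ports) / len(baseline_ports | sample_ports)

# Hypothetical observations: ports seen by the full /24 vs. a /27 equivalent.
baseline = {23, 80, 123, 445, 1433, 3389, 5060, 8080}
sample = {23, 445, 3389, 8080}

print(f"Jaccard similarity: {jaccard(baseline, sample):.2f}")  # 0.50
```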
Quantifying the accuracy of small subnet-equivalent sampling of IPv4 internet background radiation datasets
- Chindipha, Stones D, Irwin, Barry V W, Herbert, Alan
- Authors: Chindipha, Stones D , Irwin, Barry V W , Herbert, Alan
- Date: 2019
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430271 , vital:72679 , https://doi.org/10.1145/3351108.3351129
- Description: Network telescopes have been used for over a decade to aid in identifying threats by gathering unsolicited network traffic. This Internet Background Radiation (IBR) data has proved to be a significant source of intelligence in combating emerging threats on the Internet at large. Traditionally, operation has required a significant contiguous block of IP addresses. Continued operation of such sensors by researchers, and their adoption by organisations as part of operational intelligence, is becoming a challenge due to the global shortage of IPv4 addresses. The pressure is on to use allocated IP addresses for operational purposes. Future use of IBR collection methods is likely to be limited to smaller IP address pools, which may not be contiguous. This paper offers a first step towards evaluating the feasibility of such small sensors. An evaluation is conducted of the random sampling of various subnet-sized equivalents. The accuracy of observable data is compared against a traditional 'small' IPv4 network telescope using a /24 net-block. Results show that for much of the IBR data, sensors consisting of smaller, non-contiguous blocks of addresses are able to achieve high accuracy rates versus the base case. While the results reflect the current nature of IBR, they demonstrate the viability of organisations utilising free IP addresses within their networks for IBR collection and, ultimately, the production of threat intelligence.
- Full Text:
- Date Issued: 2019
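The evaluation idea described above can be pictured with a short, self-contained sketch: randomly sample subnet-sized equivalents (possibly non-contiguous) from a simulated /24 telescope and measure how much of the baseline's observable data each sample retains. The data, metric and sizes below are assumptions for illustration only, not the paper's experiment.

```python
# Minimal sketch, under assumed data structures: how much of the /24 baseline's
# observed source hosts does a random subnet-sized equivalent still capture?
import random
from collections import defaultdict

# Hypothetical telescope data: destination address (last octet) -> sources seen.
packets_by_dst = defaultdict(set)
random.seed(0)
for _ in range(5000):
    dst = random.randrange(256)                      # one of the /24's addresses
    src = f"198.51.100.{random.randrange(256)}"      # simulated scanning source
    packets_by_dst[dst].add(src)

baseline_sources = set().union(*packets_by_dst.values())

def accuracy_of_sample(size: int) -> float:
    """Fraction of baseline source hosts still seen by a random address sample."""
    chosen = random.sample(range(256), size)
    seen = set().union(*(packets_by_dst[d] for d in chosen))
    return len(seen) / len(baseline_sources)

for equiv, size in [("/25", 128), ("/26", 64), ("/27", 32), ("/28", 16)]:
    print(equiv, f"{accuracy_of_sample(size):.2%}")
```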
Effectiveness of Sampling a Small Sized Network Telescope in Internet Background Radiation Data Collection
- Chindipha, Stones D, Irwin, Barry V W, Herbert, Alan
- Authors: Chindipha, Stones D , Irwin, Barry V W , Herbert, Alan
- Date: 2018
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/427646 , vital:72453 , https://www.researchgate.net/profile/Barry-Irwin/publication/327624431_Effectiveness_of_Sampling_a_Small_Sized_Network_Telescope_in_Internet_Background_Radiation_Data_Collection/links/5b9a5067299bf14ad4d793a1/Effectiveness-of-Sampling-a-Small-Sized-Network-Telescope-in-Internet-Background-Radiation-Data-Collection.pdf
- Description: What is known today as the modern Internet has long relied on the existence and use of IPv4 addresses. However, due to the rapid growth of the Internet of Things (IoT) and the limited address space within IPv4, acquiring large IPv4 subnetworks is becoming increasingly difficult. The exhaustion of the IPv4 address space has made it near impossible for organizations to gain access to large blocks of IP space. This is of great concern particularly in the security space, which often relies on acquiring large network blocks for performing a technique called Internet Background Radiation (IBR) monitoring. This technique monitors IPv4 addresses on which no services are running. In practice, no traffic should ever arrive at such an IPv4 address, so any that does is marked as an anomaly, recorded and analyzed. This research aims to address the problem brought forth by IPv4 address space exhaustion in relation to IBR monitoring. This study's intent is to identify the smallest subnet that best represents the attributes found in the /24 IPv4 address block. This is done by determining how well a subset of the monitored original subnetwork represents the information gathered by the original subnetwork. Determining the best method of selecting a subset of IPv4 addresses from a subnetwork will enable IBR research to continue in the best way possible in an ever more restricted research space.
- Full Text:
- Date Issued: 2018
Offline-First Design for Fault Tolerant Applications.
- Linklater, Gregory, Marais, Craig, Herbert, Alan, Irwin, Barry V W
- Authors: Linklater, Gregory , Marais, Craig , Herbert, Alan , Irwin, Barry V W
- Date: 2018
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/427683 , vital:72455 , https://www.researchgate.net/profile/Barry-Irwin/publication/327624337_Offline-First_Design_for_Fault_Tolerant_Applications/links/5b9a50a1458515310584ebbe/Offline-First-Design-for-Fault-Tolerant-Applications.pdf
- Description: Faults are inevitable and frustrating. As we increasingly depend on network access and the chain of services that provides it, we suffer a greater loss in productivity when any of those services fail and service delivery is suspended. This research explores connectivity and infrastructure fault tolerance through offline-first application design, using techniques such as CQRS and event sourcing. To apply these techniques, this research details the design and implementation of LOYALTY TRACKER, an offline-first PoS system for the Android platform that was built to operate in the context of a small pub where faults are commonplace. The application demonstrates data consistency and integrity, and a complete feature set that continues to operate while offline, but is limited in scalability. The application successfully achieves its goals in the limited capacity for which it was designed.
- Full Text:
- Date Issued: 2018
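The CQRS and event-sourcing techniques mentioned above can be summarised in a few lines: commands append immutable events to a local log that keeps working offline, while queries rebuild state by replaying that log. The sketch below is a minimal illustration only; it is not the LOYALTY TRACKER implementation (which targets Android) and the event types are invented.

```python
# Minimal event-sourcing sketch: writes become an append-only event log that
# survives offline operation; reads derive state by replaying the log, and the
# log itself is what gets synchronised upstream later.
from dataclasses import dataclass, field

@dataclass
class Event:
    kind: str          # e.g. "points_awarded" (hypothetical event type)
    customer: str
    amount: int

@dataclass
class LoyaltyStore:
    log: list[Event] = field(default_factory=list)   # append-only, kept locally

    def award(self, customer: str, amount: int) -> None:
        # Command side: record the fact, even with no connectivity.
        self.log.append(Event("points_awarded", customer, amount))

    def balance(self, customer: str) -> int:
        # Query side: derive current state by replaying the log.
        return sum(e.amount for e in self.log
                   if e.kind == "points_awarded" and e.customer == customer)

store = LoyaltyStore()
store.award("alice", 10)
store.award("alice", 5)
print(store.balance("alice"))   # 15
```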
Toward distributed key management for offline authentication
- Linklater, Gregory, Smith, Christian, Herbert, Alan, Irwin, Barry V W
- Authors: Linklater, Gregory , Smith, Christian , Herbert, Alan , Irwin, Barry V W
- Date: 2018
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430283 , vital:72680 , https://doi.org/10.1145/3278681.3278683
- Description: Self-sovereign identity promises prospective users greater control, security, privacy, portability and overall greater convenience; however, the immaturity of current distributed key management solutions results in a general disregard of security advisories in favour of convenience and accessibility. This research proposes the use of intermediate certificates as a distributed key management solution. Intermediate certificates will be shown to allow multiple keys to authenticate to a single self-sovereign identity. Keys may be freely added to an identity without requiring a distributed ledger, any other third-party service, or the sharing of private keys between devices. This research will also show that key rotation is a superior alternative to existing key recovery and escrow systems in helping users recover when their keys are lost or compromised. These features will allow remote credentials to be used to issue, present and appraise remote attestations, without relying on a constant Internet connection.
- Full Text:
- Date Issued: 2018
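A rough sketch of the delegation idea follows, assuming Ed25519 keys via the Python `cryptography` package: the identity's root key signs a device key once (playing the role of an intermediate certificate), after which a verifier that trusts only the root can accept the device's attestations offline. Real X.509 intermediate certificates are stood in for by plain signed key blobs here; this is illustrative, not the paper's construction.

```python
# Conceptual sketch only: a signed public-key blob imitates an intermediate
# certificate so several device keys can authenticate to one identity without
# a ledger and without sharing the root private key between devices.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives import serialization

def raw(pub) -> bytes:
    return pub.public_bytes(serialization.Encoding.Raw, serialization.PublicFormat.Raw)

# The self-sovereign identity's root key, held offline and rarely used.
identity_root = Ed25519PrivateKey.generate()

# A new device generates its own key; the root signs it once ("intermediate").
device_key = Ed25519PrivateKey.generate()
delegation = identity_root.sign(raw(device_key.public_key()))

# Later, the device signs an attestation with its own key.
attestation = b"member since 2018"
att_sig = device_key.sign(attestation)

# A verifier trusting only the identity's root public key checks the chain.
root_pub = identity_root.public_key()
root_pub.verify(delegation, raw(device_key.public_key()))   # device key is authorised
device_key.public_key().verify(att_sig, attestation)        # attestation is genuine
print("attestation accepted offline")
```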
Towards Enhanced Threat Intelligence Through NetFlow Distillation
- Herbert, Alan, Irwin, Barry V W
- Authors: Herbert, Alan , Irwin, Barry V W
- Date: 2018
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/427699 , vital:72456 , https://www.researchgate.net/profile/Barry-Irwin/publication/327624198_Towards_Enhanced_Threat_Intelligence_Through_NetFlow_Distillation/links/5b9a501fa6fdcc59bf8ee8ea/Towards-Enhanced-Threat-Intelligence-Through-NetFlow-Distillation.pdf
- Description: Bolvedere is a hardware-accelerated NetFlow analysis platform intended to discern and distribute NetFlow records in a format requested by a user. This functionality removes the need for a user to deal with the NetFlow protocol directly, and also reduces CPU resource requirements, as data is passed on to a host in the known, requested format.
- Full Text:
- Date Issued: 2018
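The distillation idea, stripped to its essence, is handing each consumer flow records already reduced to the fields it asked for, so it never parses NetFlow itself. The sketch below is illustrative only and does not reflect Bolvedere's actual interface; the field names are invented.

```python
# Illustrative only: project already-parsed flow records onto the fields a
# consumer requested, so the consumer never touches the NetFlow protocol.
import json

def distil(flows, requested_fields):
    """Yield each flow record reduced to the requested fields."""
    for flow in flows:
        yield {f: flow.get(f) for f in requested_fields}

flows = [
    {"src": "203.0.113.5", "dst": "198.51.100.9", "dst_port": 445, "bytes": 312},
    {"src": "203.0.113.7", "dst": "198.51.100.9", "dst_port": 80,  "bytes": 9120},
]

for record in distil(flows, ["src", "dst_port"]):
    print(json.dumps(record))
```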
JSON schema for attribute-based access control for network resource security
- Linklater, Gregory, Smith, Christian, Connan, James, Herbert, Alan, Irwin, Barry V W
- Authors: Linklater, Gregory , Smith, Christian , Connan, James , Herbert, Alan , Irwin, Barry V W
- Date: 2017
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428368 , vital:72506 , https://researchspace.csir.co.za/dspace/bitstream/handle/10204/9820/Linklater_19660_2017.pdf?sequence=1&isAllowed=y
- Description: Attribute-based Access Control (ABAC) is an access control model where authorization for an action on a resource is determined by evaluating attributes of the subject, resource (object) and environment. The attributes are evaluated against boolean rules of varying complexity. ABAC rule languages are often based on serializable object modeling and schema languages, as in the case of XACML, which is based on XML Schema. XACML is a standard by OASIS, and is the current de facto standard for ABAC. While a JSON profile for XACML exists, it is simply a compatibility layer for using JSON in XACML which caters to the XML object model paradigm, as opposed to the JSON object model paradigm. This research proposes JSON Schema as a modeling language that caters to the JSON object model paradigm, on which to base an ABAC rule language. It continues to demonstrate its viability for the task by comparison against the features provided to XACML by XML Schema.
- Full Text:
- Date Issued: 2017
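The core proposal above can be illustrated with a small, invented rule expressed as JSON Schema and evaluated with the standard `jsonschema` Python validator: the request's subject, resource and environment attributes are checked against the schema, and a validation failure maps to a deny decision. The rule itself is an assumption for illustration, not one from the paper.

```python
# Illustrative sketch: an ABAC-style rule written as JSON Schema and applied
# to an access request; a schema violation becomes a DENY.
from jsonschema import validate, ValidationError

rule = {
    "type": "object",
    "properties": {
        "subject":     {"type": "object",
                        "properties": {"role": {"enum": ["admin", "operator"]}},
                        "required": ["role"]},
        "resource":    {"type": "object",
                        "properties": {"classification": {"const": "internal"}},
                        "required": ["classification"]},
        "environment": {"type": "object",
                        "properties": {"on_corporate_network": {"const": True}},
                        "required": ["on_corporate_network"]},
    },
    "required": ["subject", "resource", "environment"],
}

request = {
    "subject": {"role": "operator"},
    "resource": {"classification": "internal"},
    "environment": {"on_corporate_network": True},
}

try:
    validate(instance=request, schema=rule)
    print("PERMIT")
except ValidationError as reason:
    print("DENY:", reason.message)
```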
Weems: An extensible HTTP honeypot
- Pearson, Deon, Irwin, Barry V W, Herbert, Alan
- Authors: Pearson, Deon , Irwin, Barry V W , Herbert, Alan
- Date: 2017
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428396 , vital:72508 , https://researchspace.csir.co.za/dspace/bitstream/handle/10204/9691/Pearson_19652_2017.pdf?sequence=1&isAllowed=y
- Description: Malicious entities are constantly trying their luck at exploiting known vulnerabilities in web services in an attempt to gain unauthorized access to resources. For this reason, security specialists deploy various network defenses with the goal of preventing these threats; one such tool is the web-based honeypot. Historically, a honeypot would be deployed facing the Internet to masquerade as a live system, with the intention of attracting attackers away from the valuable data. Researchers adapted these honeypots and turned them into a platform for studying and understanding web attacks and threats on the Internet. Having the ability to develop a honeypot that replicates a specific service means researchers can now study the behavior patterns of threats, giving a better understanding of how to defend against them. This paper discusses a high-level design and implementation of Weems, a low-interaction, modular, web-based HTTP honeypot system. It also presents results obtained from various deployments over a period of time and what can be interpreted from these results.
- Full Text:
- Date Issued: 2017
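In the spirit of the system described above, a toy low-interaction HTTP honeypot needs only to answer every request blandly and log who asked for what. The sketch below uses Python's standard `http.server` and is illustrative only; Weems itself is modular and considerably more capable.

```python
# Toy low-interaction HTTP honeypot: serve a bland page, log every requester.
import logging
from http.server import BaseHTTPRequestHandler, HTTPServer

logging.basicConfig(filename="honeypot.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

class HoneypotHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        logging.info("GET %s from %s UA=%r", self.path,
                     self.client_address[0], self.headers.get("User-Agent"))
        body = b"<html><body>It works!</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):   # silence default stderr logging
        pass

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), HoneypotHandler).serve_forever()
```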
Adaptable exploit detection through scalable netflow analysis
- Herbert, Alan, Irwin, Barry V W
- Authors: Herbert, Alan , Irwin, Barry V W
- Date: 2016
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429274 , vital:72572 , https://ieeexplore.ieee.org/abstract/document/7802938
- Description: Full packet analysis on firewalls and intrusion detection systems, although effective, has been found in recent times to be detrimental to the overall performance of networks that receive large volumes of throughput. For this reason, partial packet analysis technologies such as the NetFlow protocol have emerged to better mitigate these bottlenecks through log generation. This paper researches the use of log files generated by NetFlow version 9 and IPFIX to identify successful and unsuccessful exploit attacks commonly used by automated systems. These malicious communications include, but are not limited to, exploits that attack Microsoft RPC, Samba, NTP (Network Time Protocol) and IRC (Internet Relay Chat). These attacks are recreated through existing exploit implementations on Metasploit and through hand-crafted reconstructions of exploits via known documentation of vulnerabilities. The attacks are then monitored through a preconfigured virtual testbed containing gateways and network connections commonly found on the Internet. This common attack identification system is intended for insertion as a parallel module for Bolvedere, in order to further increase the Bolvedere system's attack detection capability.
- Full Text:
- Date Issued: 2016
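The flow-level matching idea can be sketched as a simple classifier over NetFlow-style records: flows to sensitive services with suspiciously small byte and packet counts get flagged. The ports and thresholds below are invented for illustration and are not the signatures developed in the paper.

```python
# Hedged sketch: crude flow-level heuristics over NetFlow-style records.
SUSPECT_PORTS = {135: "Microsoft RPC", 445: "SMB/Samba", 123: "NTP", 6667: "IRC"}

def classify(flow):
    """Return a label if a flow record matches a crude (illustrative) heuristic."""
    service = SUSPECT_PORTS.get(flow["dst_port"])
    if service is None:
        return None
    # Very small flows to sensitive services often resemble probing/exploit attempts.
    if flow["bytes"] < 600 and flow["packets"] <= 5:
        return f"possible {service} exploit attempt from {flow['src']}"
    return None

flows = [
    {"src": "203.0.113.5", "dst_port": 445, "bytes": 312,   "packets": 3},
    {"src": "203.0.113.9", "dst_port": 80,  "bytes": 48210, "packets": 60},
]
for f in flows:
    label = classify(f)
    if label:
        print(label)
```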
Improving Fidelity in Internet Simulation through Packet Injection
- Koorn, Craig, Irwin, Barry V W, Herbert, Alan
- Authors: Koorn, Craig , Irwin, Barry V W , Herbert, Alan
- Date: 2016
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/427786 , vital:72462 , https://www.researchgate.net/profile/Barry-Irwin/publication/327622877_Improving_Fidelity_in_Internet_Simulation_through_Packet_Injection/links/5b9a1a47458515310583fd8a/Improving-Fidelity-in-Internet-Simulation-through-Packet-Injection.pdf
- Description: This paper describes an extension implemented to the NKM Internet simulation system which allows for the improved injection of packet traffic at arbitrary nodes, and the replay of previously recorded streams. The latter function allows for the relatively easy implementation of Internet Background Radiation (IBR) within the simulated portion of the Internet. This feature thereby enhances the degree of realism of the simulation, and allows for certain pre-determined traffic, such as scanning activity, to be injected and observed by client systems connected to the simulator.
- Full Text:
- Date Issued: 2016
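The replay side of this extension amounts to re-sending a previously recorded capture into the simulated network. The sketch below shows that general mechanism with Scapy; it is not NKM's injection interface, and the capture file and interface names are hypothetical.

```python
# Illustrative only: replay a recorded traffic capture onto an interface.
from scapy.all import rdpcap, sendp   # requires scapy and root privileges

def replay(pcap_path: str, iface: str) -> None:
    """Re-send every frame of a recorded capture onto the given interface."""
    frames = rdpcap(pcap_path)
    sendp(frames, iface=iface, verbose=False)

if __name__ == "__main__":
    replay("recorded_ibr.pcap", "eth1")   # hypothetical capture file and interface
```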
Towards malicious network activity mitigation through subnet reputation analysis
- Herbert, Alan, Irwin, Barry V W
- Authors: Herbert, Alan , Irwin, Barry V W
- Date: 2016
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/427799 , vital:72463 , https://www.researchgate.net/profile/Barry-Irwin/publication/327622788_Towards_Malicious_Network_Activity_Mitigation_through_Subnet_Reputation_Analysis/links/5b9a1a88458515310583fda6/Towards-Malicious-Network-Activity-Mitigation-through-Subnet-Reputation-Analysis.pdf
- Description: Analysis technologies that focus on partial packet rather than full packet analysis have shown promise in the detection of malicious activity on networks. NetFlow is one such emergent protocol; it logs network flows by summarizing their key features. These logs can then be exported to external NetFlow sinks, and proper configuration can see effective bandwidth bottleneck mitigation occurring on networks. Furthermore, each NetFlow source node is configurable with its own unique ID number. This feature enables a system that knows where a NetFlow source node ID number resides physically to say which network flows are occurring from which physical locations, irrespective of the IP addresses involved in those network flows.
- Full Text:
- Date Issued: 2016
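The attribution idea above can be sketched as a simple aggregation: because each NetFlow exporter carries its own source ID, flagged flows can be tallied per physical location regardless of the IP addresses involved. The exporter-to-location mapping and the scoring below are assumptions for illustration; the paper's reputation model is not given in the abstract.

```python
# Sketch of the aggregation idea only: tally suspicious flows per physical
# location, keyed by the NetFlow exporter's source node ID.
from collections import Counter

# Hypothetical mapping of exporter (source node) IDs to physical sites.
EXPORTER_LOCATION = {101: "Lab A", 102: "Residence network", 103: "Server room"}

def reputation(flows) -> Counter:
    """Count flows flagged as suspicious per physical location."""
    score = Counter()
    for flow in flows:
        if flow["suspicious"]:
            score[EXPORTER_LOCATION.get(flow["exporter_id"], "unknown")] += 1
    return score

flows = [
    {"exporter_id": 102, "suspicious": True},
    {"exporter_id": 102, "suspicious": True},
    {"exporter_id": 103, "suspicious": False},
]
print(reputation(flows).most_common())   # [('Residence network', 2)]
```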
DDoS Attack Mitigation Through Control of Inherent Charge Decay of Memory Implementations
- Herbert, Alan, Irwin, Barry V W, van Heerden, Renier P
- Authors: Herbert, Alan , Irwin, Barry V W , van Heerden, Renier P
- Date: 2015
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430339 , vital:72684 , https://www.academic-bookshop.com/ourshop/prod_3774091-ICCWS-2015-10th-International-Conference-on-Cyber-Warfare-and-Security-Kruger-National-Park-South-Africa-PRINT-ver-ISBN-978191030996.html
- Description: DDoS (Distributed Denial of Service) attacks in recent years have shown themselves to be devastating to the target systems and services made publicly available over the Internet. Furthermore, the backscatter caused by DDoS attacks also affects the available bandwidth and responsiveness of many other hosts within the Internet. The unfortunate reality of these attacks is that the targeted party cannot fight back, due to the presence of botnets and malware-driven hosts. The hosts that carry out the attack on a target are usually controlled remotely, and the owner of the device is unaware of it; for this reason one cannot attack back directly, as this would serve little more than to disable an innocent party. A proposed solution to these DDoS attacks is to identify a potential attacking address and ignore communication from that address for a set period of time, through time stamping.
- Full Text:
- Date Issued: 2015
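The proposed mitigation reduces to a per-address ignore window whose entry decays after a set time. A minimal sketch follows; the 300-second window and the data structure are illustrative choices, not the memory-based implementation the paper investigates.

```python
# Minimal sketch: once an address is judged to be attacking, ignore it for a
# fixed period tracked by a timestamp that "decays" (expires).
import time

BLOCK_SECONDS = 300                      # arbitrary illustrative window
_blocked_until: dict[str, float] = {}

def mark_attacker(addr: str) -> None:
    _blocked_until[addr] = time.monotonic() + BLOCK_SECONDS

def should_drop(addr: str) -> bool:
    """True while the address is still inside its ignore window."""
    expiry = _blocked_until.get(addr)
    if expiry is None:
        return False
    if time.monotonic() >= expiry:       # the entry has decayed; forget it
        del _blocked_until[addr]
        return False
    return True

mark_attacker("203.0.113.5")
print(should_drop("203.0.113.5"))   # True (within the window)
print(should_drop("198.51.100.7"))  # False
```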
FPGA Based Implementation of a High Performance Scalable NetFlow Filter
- Herbert, Alan, Irwin, Barry V W, Otten, D F, Balmahoon, M R
- Authors: Herbert, Alan , Irwin, Barry V W , Otten, D F , Balmahoon, M R
- Date: 2015
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/427887 , vital:72470 , https://www.researchgate.net/profile/Barry-Irwin/publication/327622948_FPGA_Based_Implementation_of_a_High_Performance_Scalable_NetFlow_Filter/links/5b9a17a192851c4ba8181ba5/FPGA-Based-Implementation-of-a-High-Performance-Scalable-NetFlow-Filter.pdf
- Description: Full packet analysis on firewalls and intrusion detection systems, although effective, has been found in recent times to be detrimental to the overall performance of networks that receive large volumes of throughput. For this reason, partial packet analysis algorithms such as the NetFlow protocol have emerged to better mitigate these bottlenecks. This research delves into implementing a hardware-accelerated, scalable, high-performance system for NetFlow analysis and attack mitigation. Furthermore, this implementation takes on attack mitigation through the collection and processing of network flows produced at the source, rather than at the site of incident. The research platform manages to scale out its back-end through distributed analysis over multiple hosts using the ZeroMQ toolset. ZeroMQ also allows for multiple NetFlow data publishers, so that plug-ins can subscribe to the publishers that carry the relevant data, further increasing the overall performance of the system. The dedicated custom hardware optimizes the received network flows through cleaning, summarization and re-ordering into a form that is easy to pass to the sequential component of the system: the back-end.
- Full Text:
- Date Issued: 2015
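The ZeroMQ fan-out described above can be pictured as topic-based publish/subscribe: publishers emit flow summaries under topics and analysis plug-ins subscribe only to the topics they need. The sketch below uses `pyzmq`; the topic names, port and record layout are illustrative and not Bolvedere's actual configuration.

```python
# Illustrative ZeroMQ publish/subscribe fan-out for distilled flow records.
import json
import zmq

def publisher(endpoint: str = "tcp://*:5556") -> None:
    ctx = zmq.Context.instance()
    pub = ctx.socket(zmq.PUB)
    pub.bind(endpoint)
    flow = {"src": "203.0.113.5", "dst_port": 445, "bytes": 312}
    pub.send_multipart([b"flows.smb", json.dumps(flow).encode()])

def subscriber(endpoint: str = "tcp://localhost:5556") -> None:
    ctx = zmq.Context.instance()
    sub = ctx.socket(zmq.SUB)
    sub.connect(endpoint)
    sub.setsockopt(zmq.SUBSCRIBE, b"flows.smb")   # this plug-in only wants SMB flows
    topic, payload = sub.recv_multipart()
    print(topic.decode(), json.loads(payload))
```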
A kernel-driven framework for high performance internet routing simulation
- Herbert, Alan, Irwin, Barry V W
- Authors: Herbert, Alan , Irwin, Barry V W
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429585 , vital:72624 , 10.1109/ISSA.2013.6641048
- Description: The ability to simulate packets traversing an Internet path is an integral part of providing realistic simulations for network training and cyber defence exercises. This paper builds on previous work and considers an in-kernel approach to solving the routing simulation problem. The in-kernel approach is anticipated to allow the framework to achieve throughput rates of 1GB/s or higher using commodity hardware. Processes that run outside the context of the kernel of most operating systems require context switching to access hardware and kernel modules. This leads to considerable delays in processes, such as network simulators, that frequently access hardware for tasks such as hard disk access and network packet handling. To mitigate this problem, as experienced with earlier implementations, this research looks towards implementing a kernel module to handle network routing and simulation within a UNIX-based system. This would remove delays incurred by context switching and allow for direct access to the hardware components of the host.
- Full Text:
- Date Issued: 2013
Deep Routing Simulation
- Irwin, Barry V W, Herbert, Alan
- Authors: Irwin, Barry V W , Herbert, Alan
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430353 , vital:72685 , https://www.academic-bookshop.com/ourshop/prod_2546879-ICIW-2013-8th-International-Conference-on-Information-Warfare-and-Security.html
- Description: This paper discusses a dataset of some 16 million packets targeting port 445/tcp collected by a network telescope utilising a /24 netblock in South African IP address space. An initial overview of the collected data is provided. This is followed by a detailed analysis of the packet characteristics observed, including size and TTL. The peculiarities of the observed target selection and the results of the flaw in the Conficker worm's propagation algorithm are presented. An analysis of the 4 million observed source hosts is reported, grouped by both packet counts and the number of distinct hosts per network address block. Address blocks of size /8, /16 and /24 are used for the groupings. The localisation, by geographic region and numerical proximity, of high-ranking aggregate netblocks is highlighted. The observed shift in geopolitical origins during the evolution of the Conficker worm is also discussed. The paper concludes with some overall analyses, and consideration of the application of network telescopes to the monitoring of such outbreaks in the future.
- Full Text:
- Date Issued: 2013