An investigation into XSets of primitive behaviours for emergent behaviour in stigmergic and message passing antlike agents
- Authors: Chibaya, Colin
- Date: 2014
- Subjects: Ants -- Behavior -- Computer programs , Insects -- Behavior -- Computer programs , Ant communities -- Behavior , Insect societies
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4698 , http://hdl.handle.net/10962/d1012965
- Description: Ants are fascinating creatures - not so much because they are intelligent on their own, but because as a group they display compelling emergent behaviour (the extent to which one observes features in a swarm which cannot be traced back to the actions of individual swarm members). What does each swarm member do that allows deliberate engineering of emergent behaviour? We investigate the development of a language for programming swarms of ant agents towards desired emergent behaviour. Five aspects of stigmergic ant agents (pheromone-sensitive computational devices in which a non-symbolic form of communication arises, mediated indirectly via the environment) and message passing ant agents (computational devices which rely on implicit communication spaces in which direction vectors are shared one-on-one) are studied. First, we investigate the primitive behaviours which characterize ant agents' discrete actions at the individual level. Ten such primitive behaviours are identified as candidate building blocks of the ant agent language sought. We then study mechanisms by which primitive behaviours are put together into XSets (collections of primitive behaviours, parameter values, and meta-information which spells out how and when primitive behaviours are used). Various permutations of XSets are possible, and these define the search space for best performer XSets for particular tasks. Genetic programming principles are proposed as a search strategy for best performer XSets that allow particular emergent behaviour to occur. XSets in the search space are evolved over successive genetic generations and tested for their ability to allow path finding (as a proof of concept). XSets are ranked according to the indices of merit (fitness measures which indicate how well XSets allow particular emergent behaviour to occur) they achieve. Best performer XSets for the path finding task are identified and reported. We validate the results yielded when best performer XSets are used with regard to normality, correlation, similarities in variation, and similarities between mean performances over time. In most cases, the simulation results pass these statistical tests. The last aspect we study is the application of best performer XSets to different problem tasks. Five experiments are administered in this regard. The first experiment assesses XSets' ability to allow multiple-target location (ant agents' ability to locate continuous regions of targets), and finds that best performer XSets are problem independent. However, both categories of XSets are sensitive to changes in agent density. We test the influence of individual primitive behaviours and the effect of the sequence of primitive behaviours on the indices of merit of XSets and find that most primitive behaviours are indispensable, especially when specific sequences are prescribed. The effect of pheromone dissipation on the indices of merit of stigmergic XSets is also scrutinized. Precisely, dissipation is not causal; rather, it enhances convergence. Overall, this work successfully identifies the discrete primitive behaviours of stigmergic and message passing ant-like devices. It successfully puts these primitive behaviours together into XSets which characterize a language for programming ant-like devices towards desired emergent behaviour. This XSets approach is a new ant language representation with which a wider domain of emergent tasks can be resolved. (An illustrative sketch of this evolutionary search over XSets follows this record.)
- Full Text:
- Date Issued: 2014
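The abstract above describes evolving XSets (collections of primitive behaviours plus parameters) with genetic programming and ranking them by an index of merit. The following is a minimal sketch of that kind of evolutionary search, not the thesis's actual implementation: the primitive-behaviour names, the stand-in fitness function, and the population settings are all hypothetical.

```python
import random

# Hypothetical primitive behaviours an ant agent might execute in one time step.
PRIMITIVES = ["move_forward", "turn_random", "drop_pheromone",
              "follow_gradient", "pick_up", "put_down"]

def random_xset(length=4):
    """Here an XSet is simply an ordered list of primitive behaviours."""
    return [random.choice(PRIMITIVES) for _ in range(length)]

def index_of_merit(xset):
    """Stand-in fitness: reward XSets that both deposit and follow pheromone.
    A real evaluation would run a path-finding simulation instead."""
    score = 0.0
    if "drop_pheromone" in xset:
        score += 1.0
    if "follow_gradient" in xset:
        score += 1.0
    score += 0.1 * len(set(xset))  # mild reward for behavioural diversity
    return score

def crossover(a, b):
    cut = random.randrange(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

def mutate(xset, rate=0.2):
    return [random.choice(PRIMITIVES) if random.random() < rate else p for p in xset]

def evolve(generations=30, pop_size=20):
    population = [random_xset() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=index_of_merit, reverse=True)
        parents = ranked[: pop_size // 2]  # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=index_of_merit)

if __name__ == "__main__":
    best = evolve()
    print("best performer XSet:", best, "index of merit:", round(index_of_merit(best), 2))
```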
An investigation of parameter relationships in a high-speed digital multimedia environment
- Authors: Chigwamba, Nyasha
- Date: 2014
- Subjects: Multimedia communications , Digital communications , Local area networks (Computer networks) , Computer network architectures , Computer network protocols , Computer sound processing , Sound -- Recording and reproducing -- Digital techniques
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4725 , http://hdl.handle.net/10962/d1021153
- Description: With the rapid adoption of multimedia network technologies, a number of companies and standards bodies are introducing technologies that enhance user experience in networked multimedia environments. These technologies focus on device discovery, connection management, control, and monitoring. This study focused on control and monitoring. Multimedia networks make it possible for devices that are part of the same network to reside in different physical locations. These devices contain parameters that are used to control particular features, such as speaker volume, bass, amplifier gain, and video resolution. It is often necessary for changes in one parameter to affect other parameters, such as a synchronised change between volume and bass parameters, or collective control of multiple parameters. Thus, relationships are required between the parameters. In addition, some devices contain parameters, such as voltage, temperature, and audio level, that require constant monitoring to enable corrective action when thresholds are exceeded. Therefore, a mechanism for monitoring networked devices is required. This thesis proposes relationships that are essential for the proper functioning of a multimedia network and that should, therefore, be incorporated in standard form into a protocol, such that all devices can depend on them. Implementation mechanisms for these relationships were created. Parameter grouping and monitoring capabilities within mixing console implementations and existing control protocols were reviewed. A number of requirements for parameter grouping and monitoring were derived from this review. These requirements include a formal classification of relationship types, the ability to create relationships between parameters with different underlying value units, the ability to create relationships between parameters residing on different devices on a network, and the use of an event-driven mechanism for parameter monitoring. These requirements were the criteria used to govern the implementation mechanisms that were created as part of this study. Parameter grouping and monitoring mechanisms were implemented for the XFN protocol. The mechanisms implemented fulfil the requirements derived from the review of capabilities of mixing consoles and existing control protocols. The formal classification of relationship types was implemented within XFN parameters using lists that keep track of the relationships between each XFN parameter and other XFN parameters that reside on the same device or on other devices on the network. A common value unit, known as the global unit, was defined for use as the value format within value update messages between XFN parameters that have relationships. Mapping tables were used to translate the global unit values to application-specific (universal) units, such as decibels (dB). A mechanism for bulk parameter retrieval within the XFN protocol was augmented to produce an event-driven mechanism for parameter monitoring. These implementation mechanisms were applied to an XFN-protocol-compliant graphical control application to demonstrate their usage within an end-user context. At the time of this study, the XFN protocol was undergoing standardisation within the Audio Engineering Society. The AES-64 standard has now been approved. Most of the implementation mechanisms resulting from this study have been incorporated into this standard. (An illustrative sketch of relating parameters with different units through a common global unit follows this record.)
- Full Text:
- Date Issued: 2014
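The abstract above describes relationships between parameters with different underlying units, mediated by a common "global unit" and per-parameter mapping functions. The sketch below illustrates that general idea only; the class name, the specific linear mappings, and the propagation logic are assumptions for illustration and do not reproduce the actual XFN/AES-64 mechanisms.

```python
class Parameter:
    """A controllable parameter that exchanges updates in a normalised global unit (0.0-1.0)."""

    def __init__(self, name, to_global, from_global):
        self.name = name
        self.to_global = to_global        # maps device value -> global unit
        self.from_global = from_global    # maps global unit -> device value
        self.value = 0.0
        self.peers = []                   # related parameters, possibly on other devices

    def link(self, other):
        """Create a symmetric relationship between two parameters."""
        self.peers.append(other)
        other.peers.append(self)

    def set_value(self, device_value):
        self.value = device_value
        update = self.to_global(device_value)       # translate into the common unit
        for peer in self.peers:
            peer.value = peer.from_global(update)   # each peer applies its own mapping


# Hypothetical mappings: a fader in dB (-60..0) and a bass control in percent (0..100).
volume = Parameter("volume_dB",
                   to_global=lambda db: (db + 60.0) / 60.0,
                   from_global=lambda g: g * 60.0 - 60.0)
bass = Parameter("bass_percent",
                 to_global=lambda p: p / 100.0,
                 from_global=lambda g: g * 100.0)

volume.link(bass)
volume.set_value(-12.0)                  # changing the fader...
print(bass.name, "=", bass.value)        # ...drives the related bass parameter: 80.0
```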
An investigation of protocol command translation as a means to enable interoperability between networked audio devices
- Authors: Igumbor, Osedum Peter
- Date: 2014
- Subjects: Streaming audio , Data transmission systems , Computer network protocols , Computer networks -- Management , Command languages (Computer science)
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4689 , http://hdl.handle.net/10962/d1011128
- Description: Digital audio networks allow multiple channels of audio to be streamed between devices. This eliminates the need for many different cables to route audio between devices. An added advantage of digital audio networks is the ability to configure and control the networked devices from a common control point. Common control of networked devices enables a sound engineer to establish and destroy audio stream connections between networked devices that are some distance apart. On a digital audio network, an audio transport technology enables the exchange of data streams. Typically, an audio transport technology is capable of transporting both control messages and audio data streams. There exist a number of audio transport technologies. Some of these technologies implement data transport by exchanging OSI/ISO layer 2 data frames, while others transport data within OSI/ISO layer 3 packets. There are some approaches to achieving interoperability between devices that utilize different audio transport technologies. A digital audio device typically implements an audio control protocol, which enables it to process configuration and control messages from a remote controller. An audio control protocol also defines the structure of the messages that are exchanged between compliant devices. There is currently a wide range of audio control protocols. Some audio control protocols utilize layer 3 audio transport technology, while others utilize layer 2 audio transport technology. An audio device can only communicate with other devices that implement the same control protocol, irrespective of whether a common transport technology connects the devices. The existence of different audio control protocols among devices on a network results in a situation where the devices are unable to communicate with each other. Furthermore, a single control application is unable to establish or destroy audio stream connections between the networked devices, since they implement different control protocols. When an audio engineer is designing an audio network installation, this interoperability challenge restricts the choice of devices that can be included. Even when audio transport interoperability has been achieved, common control of the devices remains a challenge. This research investigates protocol command translation as a means to enable interoperability between networked audio devices that implement different audio control protocols. It proposes the use of a command translator that is capable of receiving messages conforming to one protocol from any of the networked devices, translating the received message to conform to a different control protocol, then transmitting the translated message to the intended target, which understands the translated protocol message. In so doing, the command translator enables common control of the networked devices, since a control application is able to configure and control devices that conform to different protocols by utilizing the command translator to perform appropriate protocol translation. (A minimal sketch of such a command translator follows this record.)
- Full Text:
- Date Issued: 2014
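The abstract above proposes a command translator that receives a message in one control protocol, translates it, and forwards it to a device speaking another protocol. The sketch below shows only the shape of such a translator; the two toy message formats and the routing logic are invented for illustration and are not the actual protocols studied in the thesis.

```python
# Two toy control-message formats standing in for different audio control protocols.
# Protocol A: dict-based, e.g. {"target": "mixer1", "param": "gain", "value": 0.5}
# Protocol B: a flat string, e.g. "mixer1/gain=0.5"

def a_to_b(message_a):
    """Translate a protocol-A message into protocol-B form."""
    return f'{message_a["target"]}/{message_a["param"]}={message_a["value"]}'

def b_to_a(message_b):
    """Translate a protocol-B message into protocol-A form."""
    address, value = message_b.split("=")
    target, param = address.split("/")
    return {"target": target, "param": param, "value": float(value)}

class CommandTranslator:
    """Receives a message in either protocol and forwards it in the protocol of the target device."""

    def __init__(self):
        self.devices = {}                 # device name -> protocol it understands ("A" or "B")

    def register(self, name, protocol):
        self.devices[name] = protocol

    def forward(self, message, source_protocol):
        msg_a = message if source_protocol == "A" else b_to_a(message)
        target_protocol = self.devices[msg_a["target"]]
        out = msg_a if target_protocol == "A" else a_to_b(msg_a)
        print(f'-> delivering to {msg_a["target"]} as protocol {target_protocol}: {out}')

translator = CommandTranslator()
translator.register("mixer1", "B")
translator.forward({"target": "mixer1", "param": "gain", "value": 0.5}, source_protocol="A")
```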
An investigation of the XMOS XS1 architecture as a platform for development of audio control standards
- Authors: Dibley, James
- Date: 2014
- Subjects: Microcontrollers -- Research , Streaming audio -- Standards -- Research , Computer sound processing -- Research , Computer network protocols -- Standards -- Research , Communication -- Technological innovations -- Research
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4694 , http://hdl.handle.net/10962/d1011789
- Description: This thesis investigates the feasibility of using a new microcontroller architecture, the XMOS XS1, in the research and development of control standards for audio distribution networks. This investigation is conducted in the context of an emerging audio distribution network standard, Ethernet Audio/Video Bridging ('Ethernet AVB'), and an emerging audio control standard, AES-64. The thesis describes these emerging standards, the XMOS XS1 architecture (including its associated programming language, XC), and the open-source implementation of an Ethernet AVB streaming audio device based on the XMOS XS1 architecture. It is shown how the XMOS XS1 architecture and its associated features, in particular the XC language's mechanisms for concurrency, event-driven programming, and integration of C software modules, enable a powerful implementation of the AES-64 control standard. Feasibility is demonstrated by the implementation of an AES-64 protocol stack and its integration into an XMOS XS1-based Ethernet AVB streaming audio device, providing control of Ethernet AVB features and audio hardware, as well as implementations of advanced AES-64 control mechanisms. It is demonstrated that the XMOS XS1 architecture is a compelling platform for the development of audio control standards, and has enabled the implementation of AES-64 connection management and control over standards-compliant Ethernet AVB streaming audio devices where no such implementation previously existed. The research additionally describes a linear design method for applications based on the XMOS XS1 architecture, and provides a baseline implementation reference for the AES-64 control standard where none previously existed. (A rough, illustrative analogy of the event-driven, concurrent message-handling pattern follows this record.)
- Full Text:
- Date Issued: 2014
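The abstract above highlights the XC language's concurrency and event-driven programming as the basis for the AES-64 implementation. As a rough analogy only (Python, not XC, and not the thesis code), the sketch below shows the general pattern of independent tasks each blocking on events arriving over channel-like queues; the message contents are hypothetical.

```python
import queue
import threading

# Rough Python analogy of a tasks-and-channels style: one thread handles control
# messages while another handles (fake) audio words, each driven by queued events.

control_chan = queue.Queue()
audio_chan = queue.Queue()

def control_task():
    while True:
        msg = control_chan.get()          # block until a control event arrives
        if msg is None:
            break
        print("control task handling:", msg)

def audio_task():
    while True:
        sample = audio_chan.get()
        if sample is None:
            break
        # A real device would forward the sample to the audio hardware here.

threads = [threading.Thread(target=control_task), threading.Thread(target=audio_task)]
for t in threads:
    t.start()

control_chan.put({"parameter": "gain", "value": 0.5})   # hypothetical control message
audio_chan.put(0.25)
control_chan.put(None)                                   # shut both tasks down
audio_chan.put(None)
for t in threads:
    t.join()
```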
Classification of the difficulty in accelerating problems using GPUs
- Authors: Tristram, Uvedale Roy
- Date: 2014
- Subjects: Graphics processing units , Computer algorithms , Computer programming , Problem solving -- Data processing
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4699 , http://hdl.handle.net/10962/d1012978
- Description: Scientists continually require additional processing power, as this enables them to compute larger problem sizes, use more complex models and algorithms, and solve problems previously thought computationally impractical. General-purpose computation on graphics processing units (GPGPU) can help in this regard, as there is great potential in using graphics processors to accelerate many scientific models and algorithms. However, some problems are considerably harder to accelerate than others, and it may be challenging for those new to GPGPU to ascertain the difficulty of accelerating a particular problem or to seek appropriate optimisation guidance. Through what was learned in the acceleration of a hydrological uncertainty ensemble model, large numbers of k-difference string comparisons, and a radix sort, problem attributes have been identified that can assist in the evaluation of the difficulty in accelerating a problem using GPUs. The identified attributes are inherent parallelism, branch divergence, problem size, required computational parallelism, memory access pattern regularity, data transfer overhead, and thread cooperation. Using these attributes as difficulty indicators, an initial problem difficulty classification framework has been created that aids in GPU acceleration difficulty evaluation. This framework further facilitates directed guidance on suggested optimisations and required knowledge based on problem classification, which has been demonstrated for the aforementioned accelerated problems. It is anticipated that this framework, or a derivative thereof, will prove to be a useful resource for new or novice GPGPU developers in the evaluation of potential problems for GPU acceleration. (An illustrative sketch of combining such attribute ratings into a rough difficulty score follows this record.)
- Full Text:
- Date Issued: 2014
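The abstract above lists problem attributes used as difficulty indicators. The sketch below shows one hypothetical way such attribute ratings could be combined into a coarse difficulty score; the 1-5 scale, the averaging, and the thresholds are invented for illustration and are not the thesis's actual classification framework.

```python
# Hypothetical 1-5 ratings (5 = most GPU-friendly) for the attributes named in the abstract.
ATTRIBUTES = [
    "inherent_parallelism",
    "branch_divergence",          # rated so that 5 means little divergence
    "problem_size",
    "required_computational_parallelism",
    "memory_access_regularity",
    "data_transfer_overhead",     # rated so that 5 means low transfer overhead
    "thread_cooperation",         # rated so that 5 means little cooperation needed
]

def difficulty(ratings):
    """Average the ratings and map the result onto a coarse difficulty class."""
    missing = [a for a in ATTRIBUTES if a not in ratings]
    if missing:
        raise ValueError(f"missing ratings for: {missing}")
    score = sum(ratings[a] for a in ATTRIBUTES) / len(ATTRIBUTES)
    if score >= 4.0:
        return score, "likely easy to accelerate"
    if score >= 2.5:
        return score, "moderate effort, targeted optimisation needed"
    return score, "hard to accelerate; reconsider the algorithm"

# Example: a sort-like problem with heavy thread cooperation.
example = {"inherent_parallelism": 4, "branch_divergence": 4, "problem_size": 5,
           "required_computational_parallelism": 4, "memory_access_regularity": 3,
           "data_transfer_overhead": 3, "thread_cooperation": 2}
print(difficulty(example))
```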
Cloud information security : a higher education perspective
- Authors: Van der Schyff, Karl Izak
- Date: 2014
- Subjects: Cloud computing -- Security measures , Information technology -- Security measures , Data protection , Internet in higher education , Education, Higher -- Technological innovations
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4692 , http://hdl.handle.net/10962/d1011607
- Description: In recent years higher education institutions have come under increasing financial pressure. This has not only prompted universities to investigate more cost-effective means of delivering course content and maintaining research output, but also to investigate the administrative functions that accompany them. As such, many South African universities have either adopted or are in the process of adopting some form of cloud computing given the recent drop in bandwidth costs. However, this adoption process has raised concerns about the security of cloud-based information and this has, in some cases, had a negative impact on the adoption process. In an effort to study these concerns, many researchers have employed a positivist approach with little, if any, focus on the operational context of these universities. Moreover, there has been very little research specifically within the South African context. This study addresses some of these concerns by investigating the threats and security incident response life cycle within a higher education cloud. This was done by initially conducting a small-scale survey and a detailed thematic analysis of twelve interviews from three South African universities. The identified themes and their corresponding analyses and interpretation contribute on both a practical and a theoretical level, with the practical contributions relating to a set of security-driven criteria for selecting cloud providers as well as recommendations for universities that have adopted, or are in the process of adopting, cloud computing. Theoretically, several conceptual frameworks are offered, allowing the researcher to convey his understanding of how the aforementioned practical concepts relate to each other as well as to the concepts that constitute the research questions of this study.
- Full Text:
- Date Issued: 2014
Correlation and comparative analysis of traffic across five network telescopes
- Authors: Nkhumeleni, Thizwilondi Moses
- Date: 2014
- Subjects: Sensor networks , Computer networks , TCP/IP (Computer network protocol) , Computer networks -- Management , Electronic data processing -- Management
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4693 , http://hdl.handle.net/10962/d1011668
- Description: Monitoring unused IP address space by using network telescopes provides a favourable environment for researchers to study and detect malware, worms, denial of service and scanning activities. Research in the field of network telescopes has progressed over the past decade resulting in the development of an increased number of overlapping datasets. Rhodes University's network of telescope sensors has continued to grow with additional network telescopes being brought online. At the time of writing, Rhodes University has a distributed network of five relatively small /24 network telescopes. With five network telescope sensors, this research focuses on comparative and correlation analysis of traffic activity across the network of telescope sensors. To aid summarisation and visualisation techniques, time series representing time-based traffic activity are constructed. By employing an iterative experimental process on the captured traffic, two natural categories of the five network telescopes are presented. Using the cross- and auto-correlation methods of time series analysis, moderate correlation of traffic activity was achieved between telescope sensors in each category. Weak to moderate correlation was calculated when comparing category A and category B network telescopes' datasets. Results were significantly improved by studying TCP traffic separately. Moderate to strong correlation coefficients in each category were calculated when using TCP traffic only. UDP traffic analysis showed weaker correlation between sensors; however, the uniformity of ICMP traffic showed correlation of traffic activity across all sensors. The results confirmed the visual observation of traffic relativity in telescope sensors within the same category and quantitatively analysed the correlation of network telescopes' traffic activity. (An illustrative sketch of cross-correlating two sensors' traffic time series follows this record.)
- Full Text:
- Date Issued: 2014
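The abstract above applies cross-correlation to time series of traffic activity from different telescope sensors. Below is a minimal sketch of that comparison using synthetic hourly packet counts; the data, the shared-background construction, and the use of a plain Pearson coefficient are illustrative assumptions, not the thesis's methodology.

```python
import numpy as np

# Synthetic hourly packet counts for two hypothetical /24 telescope sensors.
rng = np.random.default_rng(0)
base = rng.poisson(lam=50, size=168)                  # one week of shared background scanning
sensor_a = base + rng.poisson(lam=5, size=168)        # sensor-specific noise
sensor_b = base + rng.poisson(lam=5, size=168)

def cross_correlation(x, y):
    """Pearson correlation of two equally sampled traffic time series."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.mean(x * y))

r = cross_correlation(sensor_a, sensor_b)
print(f"cross-correlation between sensors: {r:.2f}")   # close to 1.0 for shared activity
```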
Data-centric security : towards a utopian model for protecting corporate data on mobile devices
- Authors: Mayisela, Simphiwe Hector
- Date: 2014
- Subjects: Computer security , Computer networks -- Security measures , Business enterprises -- Computer networks -- Security measures , Mobile computing -- Security measures , Mobile communication systems -- Security measures
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4688 , http://hdl.handle.net/10962/d1011094
- Description: Data-centric security is significant in understanding, assessing and mitigating the various risks and impacts of sharing information outside corporate boundaries. Information generally leaves corporate boundaries through mobile devices. Mobile devices continue to evolve as multi-functional tools for everyday life, surpassing their initial intended use. This added capability and increasingly extensive use of mobile devices does not come without a degree of risk - hence the need to guard and protect information as it exists beyond the corporate boundaries and throughout its lifecycle. Literature on existing models crafted to protect data, rather than the infrastructure in which the data resides, is reviewed. Technologies that organisations have implemented to adopt the data-centric model are studied. A utopian model that takes into account the shortcomings of existing technologies and deficiencies of common theories is proposed. Two sets of qualitative studies are reported; the first is a preliminary online survey to assess the ubiquity of mobile devices and the extent of technology adoption towards implementation of the data-centric model; the second comprises a focus survey and expert interviews pertaining to technologies that organisations have implemented to adopt the data-centric model. The latter study revealed insufficient data at the time of writing for the results to be statistically significant; however, indicative trends supported the assertions documented in the literature review. The question that this research answers is whether or not current technology implementations designed to mitigate risks from mobile devices actually address business requirements. This research question, answered through these two sets of qualitative studies, revealed inconsistencies between the technology implementations and business requirements. The thesis concludes by proposing a realistic model, based on the outcome of the qualitative study, which bridges the gap between the technology implementations and business requirements. Future work which could perhaps be conducted in light of the findings and the comments from this research is also considered.
- Full Text:
- Date Issued: 2014
DNS traffic based classifiers for the automatic classification of botnet domains
- Authors: Stalmans, Etienne Raymond
- Date: 2014
- Subjects: Denial of service attacks -- Research , Computer security -- Research , Internet -- Security measures -- Research , Malware (Computer software) , Spam (Electronic mail) , Phishing , Command and control systems
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4684 , http://hdl.handle.net/10962/d1007739
- Description: Networks of maliciously compromised computers, known as botnets, consisting of thousands of hosts have emerged as a serious threat to Internet security in recent years. These compromised systems, under the control of an operator, are used to steal data, distribute malware and spam, launch phishing attacks, and take part in Distributed Denial-of-Service (DDoS) attacks. The operators of these botnets use Command and Control (C2) servers to communicate with the members of the botnet and send commands. The communications channels between the C2 nodes and endpoints have employed numerous detection avoidance mechanisms to prevent the shutdown of the C2 servers. Two prevalent detection avoidance techniques used by current botnets are algorithmically generated domain names and DNS Fast-Flux. The use of these mechanisms can, however, be observed and used to create distinct signatures that in turn can be used to detect DNS domains being used for C2 operation. This report details research conducted into the implementation of three classes of classification techniques that exploit these signatures in order to accurately detect botnet traffic. The techniques described make use of the traffic from DNS query responses created when members of a botnet try to contact the C2 servers. Traffic observation and categorisation is passive from the perspective of the communicating nodes. The first set of classifiers explored employs frequency analysis to detect the algorithmically generated domain names used by botnets. These were found to have a high degree of accuracy with a low false positive rate. The characteristics of Fast-Flux domains are used in the second set of classifiers. It is shown that, using these characteristics, Fast-Flux domains can be accurately identified and differentiated from legitimate domains that exhibit similar behaviour (such as those of Content Distribution Networks). The final set of classifiers uses spatial autocorrelation to detect Fast-Flux domains based on the geographic distribution of the botnet C2 servers to which the detected domains resolve. It is shown that botnet C2 servers can be detected solely based on their geographic location. This technique is shown to clearly distinguish between malicious and legitimate domains. The implemented classifiers are lightweight and use existing network traffic to detect botnets and thus do not require major architectural changes to the network. The performance impact of implementing classification of DNS traffic is examined and shown to be at an acceptable level. (An illustrative sketch of frequency analysis over domain names follows this record.)
- Full Text:
- Date Issued: 2014
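The first class of classifiers described above applies frequency analysis to domain names to flag algorithmically generated ones. The sketch below illustrates the general idea with a simple letter-frequency score; the frequency table, the scoring function, and the example domains are illustrative assumptions, not the thesis's actual classifier.

```python
import math

# Approximate English letter frequencies (percent), used as the expected distribution.
ENGLISH_FREQ = {
    'e': 12.7, 't': 9.1, 'a': 8.2, 'o': 7.5, 'i': 7.0, 'n': 6.7, 's': 6.3, 'h': 6.1,
    'r': 6.0, 'd': 4.3, 'l': 4.0, 'c': 2.8, 'u': 2.8, 'm': 2.4, 'w': 2.4, 'f': 2.2,
    'g': 2.0, 'y': 2.0, 'p': 1.9, 'b': 1.5, 'v': 1.0, 'k': 0.8, 'j': 0.15, 'x': 0.15,
    'q': 0.10, 'z': 0.07,
}

def unigram_score(domain):
    """Average negative log-probability of the domain label's letters under English
    letter frequencies; algorithmically generated names tend to score higher."""
    label = domain.split('.')[0].lower()
    letters = [c for c in label if c in ENGLISH_FREQ]
    if not letters:
        return float('inf')
    return -sum(math.log(ENGLISH_FREQ[c] / 100.0) for c in letters) / len(letters)

# A real classifier would combine several such features and learn a threshold from
# labelled domain lists; here we simply compare two illustrative names.
for name in ["facebook.com", "xkqzvbnrtplw.com"]:
    print(name, round(unigram_score(name), 2))
```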
The role of computational thinking in introductory computer science
- Authors: Gouws, Lindsey Ann
- Date: 2014
- Subjects: Computer science , Computational complexity , Problem solving -- Study and teaching
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4690 , http://hdl.handle.net/10962/d1011152
- Description: Computational thinking (CT) is gaining recognition as an important skill for students, both in computer science and other disciplines. Although there has been much focus on this field in recent years, it is rarely taught as a formal course, and there is little consensus on what exactly CT entails and how to teach and evaluate it. This research addresses the lack of resources for integrating CT into the introductory computer science curriculum. The question that we aim to answer is whether CT can be evaluated in a meaningful way. A CT framework that outlines the skills and techniques comprising CT and describes the nature of student engagement was developed; this is used as the basis for this research. An assessment (CT test) was then created to gauge the ability of incoming students, and a CT-specific computer game was developed based on the analysis of an existing game. A set of problem solving strategies and practice activities were then recommended based on criteria defined in the framework. The results revealed that the CT abilities of first year university students are relatively poor, but that the students' scores for the CT test could be used as a predictor for their future success in computer science courses. The framework developed for this research proved successful when applied to the test, computer game evaluation, and classification of strategies and activities. Through this research, we established that CT is a skill that first year computer science students are lacking, and that using CT exercises alongside traditional programming instruction can improve students' learning experiences.
- Full Text:
- Date Issued: 2014
Web-based M-learning system for ad-hoc learning of mathematical concepts amongst first year students at the University of Namibia
- Authors: Ntinda, Maria Ndapewa
- Date: 2014
- Subjects: Mathematics -- Study and teaching (Higher) -- Namibia , Mathematics -- Technological innovations , Mobile communication systems in education , Teaching -- Aids and devices , Educational innovations , Open source software
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4701 , http://hdl.handle.net/10962/d1013174
- Description: In the last decade, there has been an increase in the number of web-enabled mobile devices, offering a new platform that can be targeted for the development of learning applications. Worldwide, developers have taken initiatives in developing mobile learning (M-learning) systems to provide students with access to learning materials regardless of time and location. The purpose of this study was to investigate whether it is viable for first year students enrolled at the University of Namibia (UNAM) to use mobile phones for ad-hoc learning of mathematical concepts. A system, EnjoyMath, aimed at assisting students in preparing for tests and examinations, reviewing content, and reinforcing knowledge acquired during traditional classroom interactions, was designed and implemented using the Human Centred Design (HCD) methodology. Two iterations of the four-step HCD cycle were completed in this study. Due to the distance between UNAM and Rhodes University (where the researcher was based), the researcher could not always work in close collaboration with the UNAM students. Students from the Extended Study Unit (ESU) at Rhodes University were therefore selected in the first iteration of the project due to their proximity to the researcher and their similar demographics to the first year UNAM students, while the UNAM students were targeted in the second iteration of the study. This thesis presents the outcome of the two pre-intervention studies of the first-year students' perceptions about M-learning conducted at Rhodes University and UNAM. The results of the pre-intervention studies showed that the students are enthusiastic about using an M-learning system, because it would allow them to put in more time to practise their skills whenever and wherever they are. Moreover, the thesis presents the different stages undertaken to develop the EnjoyMath system using Open Source Software (PHP and MySQL). The results of a user study (post-intervention) conducted with participants at UNAM, which ascertained the participants' perception of the usability of the EnjoyMath system, are also presented in this thesis. The EnjoyMath system was perceived by the participants to be "passable"; hence an M-learning system could be used to complement an E-learning system at UNAM.
- Full Text:
- Date Issued: 2014
Extensibility in ORDBMS databases : an exploration of the data cartridge mechanism in Oracle9i
- Authors: Ndakunda, Tulimevava Kaunapawa
- Date: 2013-06-18
- Subjects: Database management , Oracle (Computer file)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4686 , http://hdl.handle.net/10962/d1008098 , Database management , Oracle (Computer file)
- Description: To support current and emerging database applications, Object-Relational Database Management Systems (ORDBMS) provide mechanisms to extend the data storage capabilities and the functionality of the database with application-specific types and methods. Using these mechanisms, the database may contain user-defined data types, large objects (LOBs), external procedures, extensible indexing, query optimisation techniques and other features that are treated in the same way as built-in database features. The many extensibility options provided by the ORDBMS, however, raise several implementation challenges that are not always obvious. This thesis examines a few of the key challenges that arise when extending the Oracle database with new functionality. To realise the potential of extensibility in Oracle, the thesis used the problem area of image retrieval as the main test domain. Current image retrieval techniques still lag behind the required retrieval performance, but are continuously improving. As better retrieval techniques become available, it is important that they are integrated into the available database systems to facilitate improved retrieval. The thesis also reports on the practical experiences gained from integrating an extensible indexing scenario. Sample scenarios are integrated into the Oracle9i database using the data cartridge mechanism, which allows Oracle database functionality to be extended with new functional components. The integration demonstrates how additional functionality may be effectively applied to both general and specialised domains in the database. It also reveals alternative design options that allow data cartridge developers, most of whom are not database server experts, to extend the database. The thesis is concluded with some of the key observations and options that designers must consider when extending the database with new functionality. The main challenges for developers are the learning curve required to understand the data cartridge framework and the ability to adapt already developed code within the constraints of the data cartridge using the provided extensibility APIs. Maximum reusability relies on making good choices for the basic functions, out of which specialised functions can be built.
- Full Text:
Service provisioning in two open-source SIP implementations, CINEMA and VOCAL
- Authors: Hsieh, Ming Chih
- Date: 2013-06-18
- Subjects: Real-time data processing , Computer network protocols , Internet telephony , Digital telephone systems , Communication -- Technological innovations
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4687 , http://hdl.handle.net/10962/d1008195 , Real-time data processing , Computer network protocols , Internet telephony , Digital telephone systems , Communication -- Technological innovations
- Description: The distribution of real-time multimedia streams is seen nowadays as the next step forward for the Internet. One of the most obvious uses of such streams is to support telephony over the Internet, replacing and improving traditional telephony. This thesis investigates the development and deployment of services in two Internet telephony environments, namely CINEMA (Columbia InterNet Extensible Multimedia Architecture) and VOCAL (Vovida Open Communication Application Library), both of which are open source and based on the Session Initiation Protocol (SIP). A classification of services is proposed, which divides services into two large groups: basic and advanced services. Basic services are services such as making point-to-point calls, registering with the server and making calls via the server. Any other service is considered an advanced service. Advanced services are defined by four categories: Call Related, Interactive, Internetworking and Hybrid. New services were implemented for the Call Related, Interactive and Internetworking categories. First, features involving call blocking, call screening and missed calls were implemented in the two environments in order to investigate Call Related services. Next, a notification feature was implemented in both environments in order to investigate Interactive services. Finally, a translator between MGCP and SIP was developed to investigate an Internetworking service in the VOCAL environment. The practical implementation of the new features just described was used to answer questions about the location of the services, as well as the level of required expertise and the ease or difficulty experienced in creating services in each of the two environments.
- Full Text:
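The entry above lists registering with a server among the basic SIP services studied. Purely as an illustrative sketch (the addresses, tags and Call-ID below are placeholders, and this is not code from the thesis or from CINEMA/VOCAL), a minimal SIP REGISTER request can be assembled by hand and sent over UDP:

```python
import socket

# Placeholder registrar and client addresses; no real server is assumed to be listening.
REGISTRAR = ("192.0.2.1", 5060)
CLIENT_IP = "192.0.2.10"

# A minimal REGISTER request; header values are illustrative only.
register = (
    "REGISTER sip:example.com SIP/2.0\r\n"
    f"Via: SIP/2.0/UDP {CLIENT_IP}:5060;branch=z9hG4bK776asdhds\r\n"
    "Max-Forwards: 70\r\n"
    "From: <sip:alice@example.com>;tag=1928301774\r\n"
    "To: <sip:alice@example.com>\r\n"
    f"Call-ID: a84b4c76e66710@{CLIENT_IP}\r\n"
    "CSeq: 1 REGISTER\r\n"
    f"Contact: <sip:alice@{CLIENT_IP}:5060>\r\n"
    "Expires: 3600\r\n"
    "Content-Length: 0\r\n"
    "\r\n"
)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2)
sock.sendto(register.encode("ascii"), REGISTRAR)
try:
    # A real registrar would answer with 200 OK or 401 Unauthorized.
    response, _ = sock.recvfrom(4096)
    print(response.decode("ascii", "replace").splitlines()[0])
except socket.timeout:
    print("no response (placeholder registrar)")
finally:
    sock.close()
```

Platforms such as CINEMA and VOCAL handle this exchange on the developer's behalf; the sketch only shows the shape of the underlying message that basic registration relies on.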
A mobile toolkit and customised location server for the creation of cross-referencing location-based services
- Authors: Ndakunda, Shange-Ishiwa Tangeni
- Date: 2013
- Subjects: Location-based services -- Security measures , Mobile communication systems -- Security measures , Digital communications , Java (Computer program language) , Application software -- Development -- Computer programs , User interfaces (Computer systems)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4703 , http://hdl.handle.net/10962/d1013604
- Description: Although there are several Software Development Kits and Application Programming Interfaces for client-side location-based services development, they mostly involve the creation of self-referencing location-based services. Self-referencing location-based services include services such as geocoding, reverse geocoding, route management and navigation, which focus on satisfying the location-based requirements of a single mobile device. There is a lack of open-source Software Development Kits for the development of client-side location-based services that are cross-referencing. Cross-referencing location-based services are designed for the sharing of location information amongst different entities on a given network. This project was undertaken to assemble, through incremental prototyping, a client-side Java Micro Edition location-based services Software Development Kit and a Mobicents location server, to aid mobile network operators and developers alike in quickly creating cross-referencing location-based applications, with transport and privacy protection of location information, on Session Initiation Protocol bearer networks. The privacy of the location information is protected using geolocation policies. Developers do not need to have an understanding of Session Initiation Protocol event signaling specifications or of the XML Configuration Access Protocol to use the tools that we put together. The developed tools are later consolidated using two sample applications, the friend-finder and child-tracker services. Developer guidelines are also provided, to aid in using the provided tools.
- Full Text:
- Date Issued: 2013
An exploratory study of techniques in passive network telescope data analysis
- Authors: Cowie, Bradley
- Date: 2013
- Subjects: Web search engines , Internet searching , World Wide Web , Malware (Computer software) , Computer viruses , Computer networks -- Monitoring , Computer networks -- Security measures
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4573 , http://hdl.handle.net/10962/d1002038
- Description: Careful examination of the composition and concentration of malicious traffic in transit on the channels of the Internet provides network administrators with a means of understanding and predicting damaging attacks directed towards their networks. This allows for action to be taken to mitigate the effect that these attacks have on the performance of their networks and the Internet as a whole by readying network defences and providing early warning to Internet users. One approach to malicious traffic monitoring that has garnered some success in recent times, as exhibited by the study of fast spreading Internet worms, involves analysing data obtained from network telescopes. While some research has considered using measures derived from network telescope datasets to study large scale network incidents such as Code-Red, SQLSlammer and Conficker, there is very little documented discussion on the merits and weaknesses of approaches to analysing network telescope data. This thesis is an introductory study in network telescope analysis and aims to consider the variables associated with the data received by network telescopes and how these variables may be analysed. The core research of this thesis considers both novel and previously explored analysis techniques from the fields of security metrics, baseline analysis, statistical analysis and technical analysis as applied to analysing network telescope datasets. These techniques were evaluated as approaches to recognise unusual behaviour by observing the ability of these techniques to identify notable incidents in network telescope datasets.
- Full Text:
- Date Issued: 2013
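The abstract above names baseline and statistical analysis among the techniques evaluated for spotting unusual behaviour in telescope traffic. As a simplified, hypothetical sketch (the hourly counts, the 24-hour window and the three-sigma threshold are illustrative assumptions, not values from the thesis), a rolling baseline over per-hour packet counts could be computed with pandas as follows:

```python
import pandas as pd

# Hourly packet counts observed by a telescope sensor (synthetic values for illustration).
hours = pd.date_range("2013-01-01", periods=48, freq="H")
counts = pd.Series([110] * 24 + [112, 115, 420, 430, 118] + [111] * 19, index=hours)

# Rolling 24-hour baseline and dispersion.
baseline = counts.rolling(window=24, min_periods=12).mean()
spread = counts.rolling(window=24, min_periods=12).std()

# Flag hours whose traffic exceeds the baseline by more than three standard deviations.
anomalies = counts[counts > baseline + 3 * spread]
print(anomalies)
```

In this toy series the two inflated hours are flagged; on real telescope data the window length and threshold would need tuning against known incidents, which is essentially the evaluation the thesis describes.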
An investigation of online threat awareness and behaviour patterns amongst secondary school learners
- Authors: Irwin, Michael Padric
- Date: 2013 , 2013-04-29
- Subjects: Computer security -- South Africa -- Grahamstown , Risk perception -- South Africa -- Grahamstown , High school students -- South Africa -- Grahamstown , Communication -- Sex differences -- South Africa -- Grahamstown , Internet and teenagers -- South Africa -- Grahamstown , Internet and teenagers -- Risk assessment -- South Africa -- Grahamstown , Internet -- Safety measures -- South Africa -- Grahamstown , Online social networks -- Social aspects -- South Africa -- Grahamstown
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4576 , http://hdl.handle.net/10962/d1002965 , Computer security -- South Africa -- Grahamstown , Risk perception -- South Africa -- Grahamstown , High school students -- South Africa -- Grahamstown , Communication -- Sex differences -- South Africa -- Grahamstown , Internet and teenagers -- South Africa -- Grahamstown , Internet and teenagers -- Risk assessment -- South Africa -- Grahamstown , Internet -- Safety measures -- South Africa -- Grahamstown , Online social networks -- Social aspects -- South Africa -- Grahamstown
- Description: The research area of this work is online threat awareness within an information security context. The research was carried out on secondary school learners at boarding schools in Grahamstown. The participating learners were in Grades 8 to 12. The goals of the research included determining the actual levels of awareness, the difference between these and the self-perceived levels of the participants, the assessment of risk in terms of online behaviour, and the determination of any gender differences in the answers provided by the respondents. A review of relevant literature and similar studies was carried out, and data was collected from the participating schools via an online questionnaire. This data was analysed and discussed within the frameworks of awareness of threats, online privacy, social media, sexting, cyberbullying and password habits. The concepts of information security and online privacy are present throughout these discussion chapters, providing the themes for linking the discussion points together. The results of this research show that the respondents have a high level of risk. This is due to the gaps identified between actual awareness and perception, as well as the exhibition of online behaviour patterns that are considered high risk. A strong need for the construction and adoption of threat awareness programmes by these and other schools is identified, as are areas of particular need for inclusion in such programmes. Some gender differences are present, but not to the extent that there is a significant difference between male and female respondents in terms of overall awareness, knowledge and behaviour.
- Full Text:
- Date Issued: 2013
Deploying DNSSEC in islands of security
- Authors: Murisa, Wesley Vengayi
- Date: 2013 , 2013-03-31
- Subjects: Internet domain names , Computer security , Computer network protocols , Computer security -- Africa
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4577 , http://hdl.handle.net/10962/d1003053 , Internet domain names , Computer security , Computer network protocols , Computer security -- Africa
- Description: The Domain Name System (DNS), a name resolution protocol, is one of the vulnerable network protocols that has been subjected to many security attacks such as cache poisoning, denial of service and the 'Kaminsky' spoofing attack. When DNS was designed, security was not incorporated into its design. The DNS Security Extensions (DNSSEC) provide security to the name resolution process by using public key cryptosystems. Although DNSSEC has backward compatibility with unsecured zones, it only offers security to clients when communicating with security aware zones. Widespread deployment of DNSSEC is therefore necessary to secure the name resolution process and provide security to the Internet. Only a few Top Level Domains (TLDs) have deployed DNSSEC, which inherently makes it difficult for their sub-domains to implement the security extensions to the DNS. This study analyses mechanisms that can be used by domains in islands of security to deploy DNSSEC so that the name resolution process can be secured in two specific cases, where either the TLD is not signed or the domain registrar is not able to support signed domains. The DNS client side mechanisms evaluated in this study include web browser plug-ins, local validating resolvers and domain look-aside validation. The results of the study show that web browser plug-ins cannot work on their own without local validating resolvers. The web browser validators, however, proved to be useful in indicating to the user whether a domain has been validated or not. Local resolvers present a more secure option for Internet users who cannot trust the communication channel between their stub resolvers and remote name servers. However, they do not provide a way of showing the user whether a domain name has been correctly validated or not. Based on the results of the tests conducted, it is recommended that local validators be used with browser validators for visibility and improved security. On the DNS server side, Domain Look-aside Validation (DLV) presents a viable alternative for organisations in islands of security, like most countries in Africa where only two country code Top Level Domains (ccTLDs) have deployed DNSSEC. This research recommends the use of DLV by corporates to provide DNS security to both internal and external users accessing their web based services.
- Full Text:
- Date Issued: 2013
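The abstract above weighs browser validators against local validating resolvers partly on whether a user can tell that a name was validated. As a minimal sketch only (assuming dnspython 2.x; the resolver address and the queried domain are placeholders, and this is not code from the thesis), a stub client can ask a validating recursive resolver to perform DNSSEC processing and inspect the Authenticated Data (AD) flag in the reply:

```python
import dns.resolver
import dns.flags

# Point a stub resolver at a DNSSEC-validating recursive resolver (address is illustrative).
resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["8.8.8.8"]

# Request DNSSEC processing by setting the DO bit via EDNS0.
resolver.use_edns(0, dns.flags.DO, 1232)

answer = resolver.resolve("example.com", "A")

# The AD flag indicates whether the upstream resolver validated the signature chain.
validated = bool(answer.response.flags & dns.flags.AD)
print("validated" if validated else "not validated")
```

If the AD bit is absent, either the zone is unsigned or the chosen resolver does not validate; a browser validator of the kind described above would surface this distinction to the user.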
Information technology audits in South African higher education institutions
- Authors: Angus, Lynne
- Date: 2013 , 2013-09-11
- Subjects: Electronic data processing -- Auditing , Delphi method , Education, Higher -- Computer networks -- Security measures , Information technology -- Security measures , COBIT (Information technology management standard) , IT infrastructure library , International Organization for Standardization
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4615 , http://hdl.handle.net/10962/d1006023 , Electronic data processing -- Auditing , Delphi method , Education, Higher -- Computer networks -- Security measures , Information technology -- Security measures , COBIT (Information technology management standard) , IT infrastructure library , International Organization for Standardization
- Description: The use of technology for competitive advantage has become a necessity, not only for corporate organisations, but for higher education institutions (HEIs) as well. Consequently, corporate organisations and HEIs alike must be equipped to protect against the pervasive nature of technology. To do this, they implement controls and undergo audits to ensure these controls are implemented correctly. Although HEIs are a different kind of entity to corporate organisations, HEI information technology (IT) audits are based on the same criteria as those for corporate organisations. The primary aim of this research, therefore, was to develop a set of IT control criteria that are relevant to be tested in IT audits for South African HEIs. The research method used was the Delphi technique. Data was collected, analysed, and used as feedback on which to progress to the next round of data collection. Two lists were obtained: a list of the top IT controls relevant to be tested at any organisation, and a list of the top IT controls relevant to be tested at a South African HEI. Comparison of the two lists shows that although there are some differences in the ranking of criteria used to audit corporate organisations as opposed to HEIs, the final two lists of criteria do not differ significantly. Therefore, it was shown that the same broad IT controls are required to be tested in an IT audit for a South African HEI. However, this research suggests that the risk weighting put on particular IT controls should possibly differ for HEIs, as HEIs face differing IT risks. If further studies can be established which cater for more specific controls, then the combined effect of this study and future ones will be a valuable contribution to knowledge for IT audits in a South African higher education context.
- Full Text:
- Date Issued: 2013
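The Delphi process described above produces ranked lists of IT control criteria from successive rounds of expert feedback. The thesis does not specify a particular concordance statistic; purely as a hypothetical illustration, Kendall's coefficient of concordance (W) is one common way to quantify how strongly a panel agrees on such a ranking. The panellists, criteria and ranks below are invented, and ties are not handled:

```python
import numpy as np

# Each row: one panellist's ranking of four IT control criteria (1 = most important).
ranks = np.array([
    [1, 2, 3, 4],
    [1, 3, 2, 4],
    [2, 1, 3, 4],
])

m, n = ranks.shape                        # m panellists, n criteria
rank_sums = ranks.sum(axis=0)
s = ((rank_sums - rank_sums.mean()) ** 2).sum()
w = 12 * s / (m ** 2 * (n ** 3 - n))      # Kendall's W in [0, 1]; 1 = complete agreement
print(round(w, 3))
```

A W close to 1 would suggest the panel has converged and further Delphi rounds are unlikely to change the list much.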
Log analysis aided by latent semantic mapping
- Authors: Buys, Stephanus
- Date: 2013 , 2013-04-14
- Subjects: Latent semantic indexing , Data mining , Computer networks -- Security measures , Computer hackers , Computer security
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4575 , http://hdl.handle.net/10962/d1002963 , Latent semantic indexing , Data mining , Computer networks -- Security measures , Computer hackers , Computer security
- Description: In an age of zero-day exploits and increased on-line attacks on computing infrastructure, operational security practitioners are becoming increasingly aware of the value of the information captured in log events. Analysis of these events is critical during incident response, forensic investigations related to network breaches, hacking attacks and data leaks. Such analysis has led to the discipline of Security Event Analysis, also known as Log Analysis. There are several challenges when dealing with events, foremost being the increased volumes at which events are often generated and stored. Furthermore, events are often captured as unstructured data, with very little consistency in the formats or contents of the events. In this environment, security analysts and implementers of Log Management (LM) or Security Information and Event Management (SIEM) systems face the daunting task of identifying, classifying and disambiguating massive volumes of events in order for security analysis and automation to proceed. Latent Semantic Mapping (LSM) is a proven paradigm shown to be an effective method of, among other things, enabling word clustering, document clustering, topic clustering and semantic inference. This research is an investigation into the practical application of LSM in the discipline of Security Event Analysis, showing the value of using LSM to assist practitioners in identifying types of events, classifying events as belonging to certain sources or technologies, and disambiguating different events from each other. The culmination of this research presents adaptations to traditional natural language processing techniques that resulted in improved efficacy of LSM when dealing with Security Event Analysis. This research provides strong evidence supporting the wider adoption and use of LSM, as well as further investigation into Security Event Analysis assisted by LSM and other natural language processing or machine learning techniques.
- Full Text:
- Date Issued: 2013
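LSM, as described in the entry above, is closely related to latent semantic analysis. As a rough sketch of the general idea rather than the thesis's own pipeline (the sample events, token pattern, number of latent dimensions and cluster count are all illustrative assumptions), log lines can be projected into a low-dimensional semantic space and clustered with scikit-learn:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

# A few synthetic log events from two imaginary sources.
events = [
    "sshd failed password for root from 10.0.0.5 port 4312",
    "sshd failed password for admin from 10.0.0.9 port 5523",
    "httpd GET /index.html 200 1043 bytes",
    "httpd GET /login.php 404 312 bytes",
]

# Term-document matrix over simple alphanumeric tokens.
tfidf = TfidfVectorizer(token_pattern=r"[A-Za-z0-9_./-]+")
term_doc = tfidf.fit_transform(events)

# Project events into a low-dimensional latent semantic space, then cluster them.
latent = TruncatedSVD(n_components=2, random_state=0).fit_transform(term_doc)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(latent)
print(labels)   # events from the same source should tend to share a cluster label
```

Grouping events by source or technology in this way is the kind of identification and disambiguation task the abstract attributes to LSM, albeit at a toy scale.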
Search engine poisoning and its prevalence in modern search engines
- Authors: Blaauw, Pieter
- Date: 2013
- Subjects: Web search engines , Internet searching , World Wide Web , Malware (Computer software) , Computer viruses , Rootkits (Computer software) , Spyware (Computer software)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4572 , http://hdl.handle.net/10962/d1002037
- Description: The prevalence of Search Engine Poisoning in trending topics and popular search terms on the web within search engines is investigated. Search Engine Poisoning is the act of manipulating search engines in order to display search results from websites infected with malware. Research done between February and August 2012, using both manual and automated techniques, shows how easily the criminal element manages to insert malicious content into web pages related to popular search terms within search engines. In order to provide the reader with a clear overview and understanding of the motives and the methods of the operators of Search Engine Poisoning campaigns, an in-depth review of automated and semi-automated web exploit kits is done, as well as an examination of the motives for running these campaigns. Three high profile case studies are examined, and the various Search Engine Poisoning campaigns associated with these case studies are discussed in detail. From February to August 2012, data was collected from the top trending topics on Google’s search engine along with the top listed sites related to these topics, and then passed through various automated tools to discover whether these results had been infiltrated by the operators of Search Engine Poisoning campaigns; the results of these automated scans are then discussed in detail. During the research period, manual searching for Search Engine Poisoning campaigns was also done, using high profile news events and popular search terms. These results are analysed in detail to determine the methods of attack, the purpose of the attack and the parties behind it.
- Full Text:
- Date Issued: 2013
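The entry above describes passing pages gathered from trending search topics through automated tools to detect poisoned results. As a toy illustration only (not the thesis's tooling), one crude drive-by indicator is an iframe deliberately styled to be invisible; the URL list and the regular expression below are assumptions made purely for the example:

```python
import re
import urllib.request

# Pages to inspect (placeholders; the thesis harvested candidates from trending search results).
urls = ["http://example.com/"]

# Very crude indicator used by many drive-by campaigns: an iframe made effectively invisible.
hidden_iframe = re.compile(
    r"<iframe[^>]*(?:width\s*=\s*[\"']?0|height\s*=\s*[\"']?0|display\s*:\s*none)[^>]*>",
    re.IGNORECASE,
)

for url in urls:
    try:
        html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    except OSError as err:
        print(url, "fetch failed:", err)
        continue
    flag = "suspicious" if hidden_iframe.search(html) else "no obvious indicator"
    print(url, flag)
```

Real campaigns obfuscate injected content heavily, which is why the study relied on a combination of automated scanners and manual inspection rather than a single pattern match.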