An investigation of issues of privacy, anonymity and multi-factor authentication in an open environment
- Authors: Miles, Shaun Graeme
- Date: 2012-06-20
- Subjects: Electronic data processing departments -- Security measures , Electronic data processing departments , Privacy, Right of , Computer security , Data protection , Computers -- Access control
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4656 , http://hdl.handle.net/10962/d1006653 , Electronic data processing departments -- Security measures , Electronic data processing departments , Privacy, Right of , Computer security , Data protection , Computers -- Access control
- Description: This thesis investigates issues concerning the broad area of Identity and Access Management, with a focus on open environments. Through literature research the issues of privacy, anonymity and access control are identified. The issue of privacy is an inherent problem due to the nature of the digital network environment. Information can be duplicated and modified regardless of the wishes and intentions of the owner of that information unless proper measures are taken to secure the environment. Once information is published or divulged on the network, there is very little means of controlling its subsequent usage. To address this issue a model for privacy is presented that follows the user-centric paradigm of meta-identity. The lack of anonymity, where security measures can be thwarted through the observation of the environment, is a concern for users and systems. An attacker who observes the communication channel and monitors the interactions between users and systems over a long enough period of time can infer knowledge about those users and systems. This knowledge is used to build an identity profile of potential victims for use in subsequent attacks. To address the problem, mechanisms for providing an acceptable level of anonymity while maintaining adequate accountability (from a legal standpoint) are explored. In terms of access control, the inherent weakness of single-factor authentication mechanisms is discussed. The typical mechanism is the username and password pair, which provides a single point of failure. By increasing the factors used in authentication, the amount of work required to compromise the system increases non-linearly. Within an open network, several aspects hinder wide-scale adoption and use of multi-factor authentication schemes, such as token management and the impact on usability. The framework is developed from a utopian point of view, with the aim of being applicable to many situations as opposed to a single specific domain. The framework incorporates multi-factor authentication over multiple paths using mobile phones and GSM networks, and explores the usefulness of such an approach (a minimal sketch of this idea follows this record). The models are in turn analysed, providing a discussion of the assumptions made and the problems faced by each model.
- Full Text:
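The multi-path, multi-factor approach this abstract describes can be pictured with a short sketch. The Python fragment below is an illustration under stated assumptions, not the thesis's framework: it checks a password over the primary channel, then delivers a one-time PIN over a stand-in for a second path (the GSM/SMS channel in the thesis). The user store, the delivery function and all names are hypothetical.

```python
"""Minimal sketch of two-factor authentication over two paths.

Assumptions (not from the thesis): the server verifies a password on the
primary channel, then sends a one-time PIN over a second channel (here a
print statement standing in for an SMS gateway) and checks the response.
"""
import hashlib
import hmac
import secrets

# Hypothetical user store: username -> SHA-256 digest of the password.
USERS = {"alice": hashlib.sha256(b"correct horse").hexdigest()}

def send_via_gsm(msisdn: str, message: str) -> None:
    # Stand-in for delivery over the second path (e.g. an SMS gateway).
    print(f"[SMS to {msisdn}] {message}")

def authenticate(username: str, password: str, msisdn: str) -> bool:
    # Factor 1, path 1: something the user knows.
    digest = hashlib.sha256(password.encode()).hexdigest()
    stored = USERS.get(username)
    if stored is None or not hmac.compare_digest(stored, digest):
        return False
    # Factor 2, path 2: something the user has (the phone on the GSM network).
    otp = f"{secrets.randbelow(10**6):06d}"
    send_via_gsm(msisdn, f"Your one-time PIN is {otp}")
    reply = input("Enter the PIN you received: ").strip()
    return hmac.compare_digest(reply, otp)

if __name__ == "__main__":
    print("Authenticated:", authenticate("alice", "correct horse", "+27831234567"))
```

The point of the second path is that an attacker must now compromise both the login channel and the phone, which is one way to read the abstract's claim that attacker effort grows non-linearly with the number of factors.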
A structural and functional specification of a SCIM for service interaction management and personalisation in the IMS
- Authors: Tsietsi, Mosiuoa Jeremia
- Date: 2012
- Subjects: Internet Protocol multimedia subsystem , Internet Protocol multimedia subsystem -- Specifications , Long-Term Evolution (Telecommunications) , European Telecommunications Standards Institute , Wireless communication systems , Multimedia communications
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4606 , http://hdl.handle.net/10962/d1004864 , Internet Protocol multimedia subsystem , Internet Protocol multimedia subsystem -- Specifications , Long-Term Evolution (Telecommunications) , European Telecommunications Standards Institute , Wireless communication systems , Multimedia communications
- Description: The Internet Protocol Multimedia Subsystem (IMS) is a component of the 3G mobile network that has been specified by standards development organisations such as the 3GPP (3rd Generation Partnership Project) and ETSI (European Telecommunications Standards Institute). IMS seeks to guarantee that the telecommunication network of the future provides subscribers with seamless access to services across disparate networks. In order to achieve this, it defines a service architecture that hosts application servers which provide subscribers with value-added services. Typically, an application server bundles all the functionality it needs to execute the services it delivers; however, this view is currently being challenged. It is now thought that services should be synthesised from simple building blocks called service capabilities. This decomposition would facilitate the re-use of service capabilities across multiple services and would support the creation of new services that could not have originally been conceived. The shift from monolithic services to those built from service capabilities poses a challenge to the current service model in IMS. To accommodate this, the 3GPP has defined an entity known as a service capability interaction manager (SCIM) that would be responsible for managing the interactions between service capabilities in order to realise complex services. Some of these interactions could potentially lead to undesirable results, which the SCIM must work to avoid. As an added requirement, it is believed that the network should allow policies to be applied to network services, which the SCIM should be responsible for enforcing. At the time of writing, the functional and structural architecture of the SCIM has not yet been standardised. This thesis explores the current service architecture of the IMS in detail. Proposals that address the structure and functions of the SCIM are carefully compared and contrasted. This investigation leads to the presentation of key aspects of the SCIM, and provides solutions that explain how it should interact with service capabilities, manage undesirable interactions and factor user and network operator policies into its execution model (a toy sketch of this brokering role follows this record). A modified design of the IMS service layer that embeds the SCIM is subsequently presented and described. The design uses existing IMS protocols and requires no change in the behaviour of the standard IMS entities. In order to develop a testbed for experimental verification of the design, the identification of suitable software platforms was required. This thesis presents some of the most popular platforms currently used by developers, such as the Open IMS Core and OpenSER, as well as an open source, Java-based multimedia communication platform called Mobicents. As a precursor to the development of the SCIM, a converged multimedia service is presented that describes how a video streaming application leveraged by a web portal was implemented for an IMS testbed using Mobicents components. The Mobicents SIP Servlets container was subsequently used to model an initial prototype of the SCIM, using a multi-component telephony service to illustrate the proposed service execution model. The design focuses on SIP-based services only, but should work for other types of IMS application servers as well.
- Full Text:
- Date Issued: 2012
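Since the SCIM's architecture is not standardised, the following Python sketch only illustrates the brokering idea the abstract describes: service capabilities are chained, operator/user policy is consulted, and combinations known to interact badly are skipped. The capability names and the conflict table are invented for the example; they are not drawn from the 3GPP specifications.

```python
"""SCIM-like broker sketch: compose capabilities, enforce policy, and
filter undesirable interactions. All names are illustrative."""
from typing import Callable

Capability = Callable[[dict], dict]

def call_forwarding(session: dict) -> dict:
    session["forwarded_to"] = "sip:bob@example.org"
    return session

def call_barring(session: dict) -> dict:
    session["barred"] = True
    return session

# Pairs of capabilities whose combination is undesirable.
CONFLICTS = {frozenset({"call_forwarding", "call_barring"})}

class Scim:
    def __init__(self, policy: Callable[[str, dict], bool]):
        self.policy = policy  # user/operator policy hook

    def execute(self, session: dict, chain: list[tuple[str, Capability]]) -> dict:
        applied: list[str] = []
        for name, capability in chain:
            if not self.policy(name, session):
                continue  # policy forbids this capability for this session
            if any(frozenset({name, prior}) in CONFLICTS for prior in applied):
                continue  # would interact badly with an earlier capability
            session = capability(session)
            applied.append(name)
        return session

scim = Scim(policy=lambda name, session: True)  # permissive policy for the demo
result = scim.execute({"caller": "sip:alice@example.org"},
                      [("call_barring", call_barring),
                       ("call_forwarding", call_forwarding)])
print(result)  # barring applied; forwarding skipped as a conflicting interaction
```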
An investigation into information security practices implemented by Research and Educational Network of Uganda (RENU) member institutions
- Authors: Kisakye, Alex
- Date: 2012 , 2012-11-06
- Subjects: Research and Educational Network of Uganda , Computer security -- Education (Higher) -- Uganda , Computer networks -- Security measures -- Education (Higher) -- Uganda , Management -- Computer network resources -- Education (Higher) -- Uganda , Computer hackers
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4586 , http://hdl.handle.net/10962/d1004748 , Research and Educational Network of Uganda , Computer security -- Education (Higher) -- Uganda , Computer networks -- Security measures -- Education (Higher) -- Uganda , Management -- Computer network resources -- Education (Higher) -- Uganda , Computer hackers
- Description: Educational institutions are known to be at the heart of complex computing systems in any region in which they exist, especially in Africa. The existence of high-end computing power, often connected to the Internet and to research network grids, makes educational institutions soft targets for attackers. Attackers of such networks are normally looking either to exploit the large computing resources available for use in secondary attacks or to steal Intellectual Property (IP) from the research networks to which the institutions belong. Universities also store a great deal of information about their current students and staff, as well as alumni, ranging from personal to financial information. Unauthorised access to such information violates statutory requirements and could grossly tarnish the institution's name, not to mention cost the institution a great deal of money in post-incident activities. The purpose of this study was to investigate the information security practices that have been put in place by Research and Education Network of Uganda (RENU) member institutions to safeguard institutional data and systems from both internal and external security threats. The study was conducted on six member institutions in three phases, between the months of May and July 2011 in Uganda. Phase One involved the use of a customised quantitative questionnaire tool. The tool - originally developed by the information security governance task force of EDUCAUSE - was customised for use in Uganda. Phase Two involved the use of a qualitative interview guide in sessions between the investigator and respondents. Results show that institutions rely heavily on Information and Communication Technology (ICT) systems and services, that all institutions had already acquired more than three information systems, and that they had acquired and implemented some cutting-edge equipment and systems in their data centres. Further results show that institutions have established ICT departments, although staff have not been trained in information security. All institutions interviewed have ICT policies, although only a few have carried out policy sensitisation and awareness campaigns for their staff and students.
- Full Text:
- Date Issued: 2012
An investigation into the control of audio streaming across networks having diverse quality of service mechanisms
- Authors: Foulkes, Philip James
- Date: 2012
- Subjects: Streaming audio -- Testing , Data transmission systems -- Testing , Computer networks -- Management , Computer networks -- Evaluation , Computer network protocols -- Standards
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4607 , http://hdl.handle.net/10962/d1004865
- Description: The transmission of real-time audio data across digital networks is subject to strict quality of service requirements. These networks need to be able to guarantee network resources (e.g., bandwidth), ensure timely and deterministic data delivery, and provide time synchronisation mechanisms to ensure successful transmission of this data. Two open standards-based networking technologies, namely IEEE 1394 and the recently standardised Ethernet AVB, provide distinct methods for achieving these goals. Audio devices that are compatible with IEEE 1394 networks exist, and audio devices that are compatible with Ethernet AVB networks are starting to come onto the market. There is a need for mechanisms to provide compatibility between the audio devices that reside on these disparate networks such that existing IEEE 1394 audio devices are able to communicate with Ethernet AVB audio devices, and vice versa. The audio devices that reside on these networks may be remotely controlled by a diverse set of incompatible command and control protocols. It is desirable to have a common network-neutral method of control over the various parameters of the devices that reside on these networks. As part of this study, two Ethernet AVB systems were developed. One system acts as an Ethernet AVB audio endpoint device and the other acts as an audio gateway between IEEE 1394 and Ethernet AVB networks. These systems, along with existing IEEE 1394 audio devices, were used to demonstrate the ability to transfer audio data between the networking technologies. Each of the devices is remotely controllable via a network-neutral command and control protocol, XFN. The IEEE 1394 and Ethernet AVB devices are used to demonstrate the use of the XFN protocol to allow network-neutral connection management to take place between IEEE 1394 and Ethernet AVB networks (a sketch of this network-neutral control idea follows this record). User control over these diverse devices is achieved via a graphical patchbay application, which aims to provide a consistent user interface to a diverse range of devices.
- Full Text:
- Date Issued: 2012
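A rough Python sketch of the network-neutral control idea: the patchbay manipulates every device through one abstract interface, and the per-network details stay inside the device classes. The class and method names are assumptions made for illustration; this is not the XFN protocol itself.

```python
"""Network-neutral device control in the spirit of the design above:
one control surface for every device, whatever network it sits on."""
from abc import ABC, abstractmethod

class ControllableDevice(ABC):
    def __init__(self, name: str):
        self.name = name

    @abstractmethod
    def set_parameter(self, path: str, value: float) -> None: ...

    @abstractmethod
    def connect(self, out_plug: int, dest: "ControllableDevice", in_plug: int) -> None: ...

class FireWireDevice(ControllableDevice):
    def set_parameter(self, path: str, value: float) -> None:
        print(f"[1394] {self.name}: {path} <- {value}")   # would map to 1394 transactions
    def connect(self, out_plug: int, dest: ControllableDevice, in_plug: int) -> None:
        print(f"[1394] {self.name} plug {out_plug} -> {dest.name} plug {in_plug}")

class AvbDevice(ControllableDevice):
    def set_parameter(self, path: str, value: float) -> None:
        print(f"[AVB] {self.name}: {path} <- {value}")    # would map to AVB control messages
    def connect(self, out_plug: int, dest: ControllableDevice, in_plug: int) -> None:
        print(f"[AVB] {self.name} plug {out_plug} -> {dest.name} plug {in_plug}")

# The patchbay never branches on network type; the 1394<->AVB gateway is
# simply another device reached by the same calls.
mixer = FireWireDevice("mixer")
gateway = AvbDevice("gateway")
mixer.set_parameter("/out/1/gain", -6.0)
mixer.connect(1, gateway, 1)
```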
Automated grid fault detection and repair
- Authors: Luyt, Leslie
- Date: 2012 , 2012-05-24
- Subjects: Computational grids (Computer systems) -- Maintenance and repair , Cloud computing -- Maintenance and repair , Computer architecture
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4670 , http://hdl.handle.net/10962/d1006693 , Computational grids (Computer systems) -- Maintenance and repair , Cloud computing -- Maintenance and repair , Computer architecture
- Description: With the rise in interest in the field of grid and cloud computing, it is becoming increasingly necessary for the grid to be easily maintainable. This maintenance of the grid and grid services can be made easier by using an automated system to monitor and repair the grid as necessary. We propose a novel system to perform automated monitoring and repair of grid systems. To the best of our knowledge, no such systems exist. The results show that certain faults can be easily detected and repaired. (A minimal sketch of such a monitor-and-repair loop follows this record.)
- Full Text:
- Date Issued: 2012
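A minimal sketch of the detect-and-repair loop such a system implies, assuming services can be probed and restarted through the host's init system. The service name and the repair action are placeholders, not the thesis's implementation.

```python
"""Monitor-and-repair loop sketch: probe each grid service periodically
and attempt a restart when a fault is detected."""
import subprocess
import time

SERVICES = ["globus-gridftp-server"]   # hypothetical grid service names

def service_alive(name: str) -> bool:
    # Probe via the init system; 'is-active --quiet' exits 0 when the unit runs.
    return subprocess.run(["systemctl", "is-active", "--quiet", name]).returncode == 0

def repair(name: str) -> None:
    print(f"fault detected in {name}; attempting restart")
    subprocess.run(["systemctl", "restart", name])

def monitor(interval_seconds: float = 30.0) -> None:
    # Detect-and-repair loop: probe every service, repair on failure, sleep.
    while True:
        for name in SERVICES:
            if not service_alive(name):
                repair(name)
        time.sleep(interval_seconds)

if __name__ == "__main__":
    monitor()
```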
COIN : a customisable, incentive driven video on demand framework for low-cost IPTV services
- Authors: Musvibe, Ray
- Date: 2012 , 2012-03-02
- Subjects: Internet television , Digital television , Television broadcasting -- Technological innovations , Multicasting (Computer networks) , Video dial tone , Open source software , Telecommunication , Capital investments
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4654 , http://hdl.handle.net/10962/d1006650 , Internet television , Digital television , Television broadcasting -- Technological innovations , Multicasting (Computer networks) , Video dial tone , Open source software , Telecommunication , Capital investments
- Description: There has been a significant rise in the provision of television and video services over IP (IPTV) in recent years. Increasing network capacity and falling bandwidth costs have made it both technically and economically feasible for service providers to deliver IPTV services. Several telecommunications (telco) operators worldwide are rolling out IPTV solutions and view IPTV as a major service differentiator and alternative revenue source. The main challenge that IPTV providers currently face, however, is the increasingly congested television service provider market, which also includes Internet Television. IPTV solutions therefore need strong service differentiators to succeed, and will doubtless sell much faster if they are more affordable. Advertising has already been used in many service sectors to help lower service costs, including traditional broadcast television. This thesis therefore explores the role that advertising can play in helping to lower the cost of IPTV services and in incentivising IPTV billing (a toy billing calculation follows this record). Another approach that IPTV providers can use to help sell their product is to address the growing need for control by today's multimedia users. This thesis therefore explores the varied approaches that can be used to achieve viewer-focused IPTV implementations. To further lower the cost of IPTV services, telcos can also turn to low-cost, open source platforms for service delivery. The adoption of low-cost infrastructure by telcos can lead to reduced Capital Expenditure (CAPEX), which in turn can lead to lower service fees, and ultimately to higher subscriptions and revenue. Therefore, in this thesis, the author proposes a CustOmisable, INcentive (COIN) driven Video on Demand (VoD) framework, developed and deployed using the Mobicents Communication Platform, an open source service creation and execution platform. The COIN framework aims to provide a viewer-focused, economically competitive service that combines the potential cost savings of using free and open source software (FOSS) with an innovative, incentive-driven billing approach. The project also aims to evaluate whether the Mobicents Platform is a suitable service creation and execution platform for the proposed framework. Additionally, the proposed implementation aims to be interoperable with other IPTV implementations, and hence follows current IPTV standardisation architectures and trends. The service testbed and its implementation are described in detail and only free and open source software is used; this is to enable its easy duplication and extension for future research.
- Full Text:
- Date Issued: 2012
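One way to picture the incentive-driven billing the abstract mentions is a session fee that advert credits offset. The tariff and credit values below are invented for illustration; the thesis does not prescribe them.

```python
"""Toy incentive-driven billing: advertising revenue offsets the
subscriber's video-on-demand fee, never dropping below a floor price."""

def session_charge(base_fee: float, adverts_watched: int,
                   credit_per_advert: float, floor: float = 0.0) -> float:
    """Fee for one VoD session after advert credits are applied."""
    return max(floor, base_fee - adverts_watched * credit_per_advert)

# A subscriber watching 4 adverts against a 10.00 movie at 1.50 credit each:
print(session_charge(base_fee=10.00, adverts_watched=4, credit_per_advert=1.50))  # 4.0
```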
Culturally-relevant augmented user interfaces for illiterate and semi-literate users
- Authors: Gavaza, Takayedzwa
- Date: 2012 , 2012-06-14
- Subjects: User interfaces (Computer systems) -- Research , Computer software -- Research , Graphical user interfaces (Computer systems) -- Research , Human-computer interaction , Computers and literacy
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4665 , http://hdl.handle.net/10962/d1006679 , User interfaces (Computer systems) -- Research , Computer software -- Research , Graphical user interfaces (Computer systems) -- Research , Human-computer interaction , Computers and literacy
- Description: This thesis discusses guidelines for developers of Augmented User Interfaces that can be used by illiterate and semi-literate users. To discover how illiterate and semi-literate users intuitively understand interaction with a computer, a series of Wizard of Oz experiments was conducted. In the first Wizard of Oz study, users were presented with a standard desktop computer, fitted with a number of input devices, to determine how they assume interaction should occur. This study found that the users preferred the use of speech and gestures, which mirrored findings from other researchers. The study also found that users struggled to understand the tab metaphor which is used frequently in applications. From these findings, a localised culturally-relevant tab interface was developed to determine the feasibility of localised Graphical User Interface components. A second study was undertaken to compare the localised tab interface with the traditional tabbed interface. This study collected both quantitative and qualitative data from the participants. It found that users could interact with a localised tabbed interface faster and more accurately than with the traditional counterpart. More importantly, users stated that they intuitively understood the localised interface component, whereas they did not understand the traditional tab metaphor. These user studies have shown that the use of self-explanatory animations, video feedback, localised tabbed interface metaphors and voice output has a positive impact on enabling illiterate and semi-literate users to access information.
- Full Text:
- Date Issued: 2012
GPF : a framework for general packet classification on GPU co-processors
- Authors: Nottingham, Alastair
- Date: 2012
- Subjects: Graphics processing units , Coprocessors , Computer network protocols , Computer networks -- Security measures , NVIDIA Corporation
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4661 , http://hdl.handle.net/10962/d1006662 , Graphics processing units , Coprocessors , Computer network protocols , Computer networks -- Security measures , NVIDIA Corporation
- Description: This thesis explores the design and experimental implementation of GPF, a novel protocol-independent, multi-match packet classification framework. This framework is targeted and optimised for flexible, efficient execution on NVIDIA GPU platforms through the CUDA API, but should not be difficult to port to other platforms, such as OpenCL, in the future. GPF was conceived and developed in order to accelerate classification of large packet capture files, such as those collected by Network Telescopes. It uses a multiphase SIMD classification process which exploits both the parallelism of packet sets and the redundancy in filter programs, in order to classify packet captures against multiple filters at extremely high rates. The resultant framework - comprising classification, compilation and buffering components - efficiently leverages GPU resources to classify arbitrary protocols, and returns multiple filter results for each packet. The classification functions described were verified and evaluated by testing an experimental prototype implementation against several filter programs, of varying complexity, on devices from three GPU platform generations. In addition to the significant speedup achieved in processing results, analysis indicates that the prototype classification functions perform predictably, and scale linearly with respect to both packet count and filter complexity. Furthermore, classification throughput (packets/s) remained essentially constant regardless of the underlying packet data, and thus the effective data rate when classifying a particular filter was heavily influenced by the average size of packets in the processed capture. For example: in the trivial case of classifying all IPv4 packets ranging in size from 70 bytes to 1 KB, the observed data rate achieved by the GPU classification kernels ranged from 60 Gbps to 900 Gbps on a GTX 275, and from 220 Gbps to 3.3 Tbps on a GTX 480. In the less trivial case of identifying all ARP, TCP, UDP and ICMP packets for both IPv4 and IPv6 protocols, the effective data rates ranged from 15 Gbps to 220 Gbps (GTX 275), and from 50 Gbps to 740 Gbps (GTX 480), for 70 B and 1 KB packets respectively. (A short arithmetic check of the constant-throughput claim follows this record.)
- Full Text:
- Date Issued: 2012
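The quoted figures can be sanity-checked: if throughput in packets/s is constant, each (data rate, packet size) pair should imply roughly the same packet rate. The short script below recomputes this from the numbers in the abstract, taking 1 KB as 1024 bytes.

```python
"""Back-of-the-envelope check that the abstract's data rates imply a
roughly constant packet rate per device, independent of packet size."""

quoted = {  # (device, packet size in bytes) -> data rate in bits/s
    ("GTX 275", 70): 60e9,   ("GTX 275", 1024): 900e9,
    ("GTX 480", 70): 220e9,  ("GTX 480", 1024): 3.3e12,
}

for (device, size), rate_bps in quoted.items():
    pps = rate_bps / (size * 8)  # packets/s = bits per second / bits per packet
    print(f"{device}, {size:>4} B packets: {pps / 1e6:7.1f} Mpackets/s")

# The GTX 275 works out to ~107-110 Mpackets/s and the GTX 480 to
# ~393-403 Mpackets/s at both sizes, consistent with a constant packet
# rate whose effective data rate simply tracks average packet size.
```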
Investigating tools and techniques for improving software performance on multiprocessor computer systems
- Authors: Tristram, Waide Barrington
- Date: 2012
- Subjects: Multiprocessors , Multiprogramming (Electronic computers) , Parallel programming (Computer science) , Linux , Abstract data types (Computer science) , Threads (Computer programs) , Computer programming
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4655 , http://hdl.handle.net/10962/d1006651 , Multiprocessors , Multiprogramming (Electronic computers) , Parallel programming (Computer science) , Linux , Abstract data types (Computer science) , Threads (Computer programs) , Computer programming
- Description: The availability of modern commodity multicore processors and multiprocessor computer systems has resulted in the widespread adoption of parallel computers in a variety of environments, ranging from the home to workstation and server environments in particular. Unfortunately, parallel programming is harder and requires more expertise than the traditional sequential programming model. The variety of tools and parallel programming models available to the programmer further complicates the issue. The primary goal of this research was to identify and describe a selection of parallel programming tools and techniques to aid novice parallel programmers in the process of developing efficient parallel C/C++ programs for the Linux platform. This was achieved by highlighting and describing the key concepts and hardware factors that affect parallel programming, providing a brief survey of commonly available software development tools and parallel programming models and libraries, and presenting structured approaches to software performance tuning and parallel programming. Finally, the performance of several parallel programming models and libraries was investigated, along with the programming effort required to implement solutions using the respective models (a sketch of the underlying measurement follows this record). A quantitative research methodology was applied to the investigation of the performance and programming effort associated with the selected parallel programming models and libraries, which included automatic parallelisation by the compiler, Boost Threads, Cilk Plus, OpenMP, POSIX threads (Pthreads), and Threading Building Blocks (TBB). Additionally, the performance of the GNU C/C++ and Intel C/C++ compilers was examined. The results revealed that the choice of parallel programming model or library is dependent on the type of problem being solved and that there is no overall best choice for all classes of problem. However, the results also indicate that parallel programming models with higher levels of abstraction require less programming effort and provide similar performance compared to explicit threading models. The principal conclusion was that the problem analysis and parallel design are important factors in the selection of the parallel programming model and tools, but that models with higher levels of abstraction, such as OpenMP and Threading Building Blocks, are favoured.
- Full Text:
- Date Issued: 2012
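The thesis benchmarks C/C++ models (OpenMP, Pthreads, TBB and others); purely as a language-neutral illustration of the measurement itself, the sketch below times a sequential run against a process-parallel one and reports the speedup T_seq / T_par. The workload is an arbitrary CPU-bound function chosen for the example.

```python
"""Sketch of a speedup measurement: time the same job list run
sequentially and in parallel, then report T_seq / T_par."""
import math
import time
from multiprocessing import Pool

def work(n: int) -> float:
    # Arbitrary CPU-bound task standing in for a real workload.
    return sum(math.sqrt(i) for i in range(n))

def timed(fn) -> float:
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

if __name__ == "__main__":
    jobs = [2_000_000] * 8
    t_seq = timed(lambda: [work(n) for n in jobs])
    with Pool() as pool:                      # one worker process per core by default
        t_par = timed(lambda: pool.map(work, jobs))
    print(f"sequential {t_seq:.2f}s, parallel {t_par:.2f}s, "
          f"speedup {t_seq / t_par:.2f}x")
```

The same harness shape applies whatever the model under test; what the thesis varies is the parallel implementation (compiler auto-parallelisation, OpenMP pragmas, explicit threads, TBB tasks) and the effort needed to write it.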
Web-based visualisation techniques for reporting zoonotic outbreaks
- Authors: Ncube, Sinini Paul
- Date: 2012
- Subjects: Zoonoses -- Reporting , Communicable diseases -- Reporting , Communication in medicine , Medical telematics , Internet , Information visualization
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4664 , http://hdl.handle.net/10962/d1006672 , Zoonoses -- Reporting , Communicable diseases -- Reporting , Communication in medicine , Medical telematics , Internet , Information visualization
- Description: Zoonotic diseases are diseases that are transmitted from animals or vectors to humans and vice versa. The public, together with veterinary authorities, should be able to access disease information readily, as it is vital in rapidly controlling resultant zoonotic outbreak threats through improved awareness. Currently, the reporting of disease information in South Africa is predominantly limited to traditional methods of Information Communication Technologies (ICTs) like faxes, monthly newspaper reports, radios, phones and televisions. Although these are effective ways of communication, their disadvantage is that the information most of them offer can only be accessed at specific times during a crisis. New technologies like the internet have become the most efficient way of distributing information in near-real-time. Many developed countries have used web-based reporting platforms to deliver timely information through temporal and geographic visualisation techniques. There has been an attempt at web-based reporting in South Africa, but most of these sites are characterised by heavy text, which makes them time-consuming to use or maintain. As a result most sites have not been updated or have ceased to exist because of the workload involved. The success of web reporting mechanisms in developed countries offers evidence that web-based reporting systems, when appropriately visualised, can improve the easy understanding of information and efficiency in the analysis of that data. In this thesis, a web-based reporting prototype was proposed after gathering information from different sources: literature related to disease reporting and the visualisation of infectious diseases; the exploration of the currently deployed web systems; and the investigation of user requirements from relevant parties. The proposed prototype system was then developed using Adobe Flash tools, Java and MySQL. A focus group then reviewed the developed system to ascertain that the relevant requirements had been incorporated and to obtain additional ideas about the system. This led to the proposal of a new prototype system that can be used by the authorities concerned as a plan to develop a fully functional disease reporting system for South Africa.
- Full Text:
- Date Issued: 2012
μCloud : a P2P cloud platform for computing service provision
- Authors: Fouodji Tasse, Ghislain
- Date: 2012 , 2012-08-22
- Subjects: Cloud computing , Peer-to-peer architecture (Computer networks) , Computer architecture , Computer service industry
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4663 , http://hdl.handle.net/10962/d1006669 , Cloud computing , Peer-to-peer architecture (Computer networks) , Computer architecture , Computer service industry
- Description: The advancements in virtualisation technologies have provided a large spectrum of computational approaches. Dedicated computations can be run in private environments (virtual machines) created within the same computer. Through capable APIs, this functionality is leveraged for the service we wish to implement: a computing power service (CPS). We target peer-to-peer systems for this service, to exploit the potential of aggregating computing resources. The concept of a P2P network is mostly known for its widespread usage in distributed networks for sharing resources such as content files or real-time data. This study adds computing power to the list of shared resources by describing a suitable service composition. Taking into account the dynamic nature of the platform, this CPS provision is achieved using a self-stabilising clustering algorithm (a toy round of such clustering follows this record). The resulting system of our research is thus based around a hierarchical P2P architecture and offers end-to-end consideration of resource provisioning and reliability. We named this system μCloud and characterise it as a self-provisioning cloud service platform. It is designed, implemented and presented in this dissertation. Finally, we assessed our work by showing that μCloud succeeds in providing user-centric services using a P2P computing unit. With this, we conclude that our system would be highly beneficial in both small and massively deployed environments.
- Full Text:
- Date Issued: 2012
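The abstract above mentions a self-stabilizing clustering algorithm for CPS provision. The sketch below shows what one stabilisation round of such a rule could look like; the Peer type and the highest-capacity-neighbour rule are assumptions for illustration, not the thesis's actual algorithm.

```java
import java.util.List;

// One round of a self-stabilising clustering rule: each peer adopts the
// most capable peer in its neighbourhood (possibly itself) as cluster
// head. Because each round recomputes from current local state, repeated
// rounds converge regardless of the (possibly corrupted) starting state.
class Peer {
    final String id;
    final double capacity;   // advertised computing capacity
    Peer clusterHead;        // current head; may be stale between rounds

    Peer(String id, double capacity) { this.id = id; this.capacity = capacity; }

    void stabilise(List<Peer> neighbours) {
        Peer best = this;
        for (Peer p : neighbours) {
            if (p.capacity > best.capacity) best = p;
        }
        clusterHead = best;
    }
}
```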
A framework for the application of network telescope sensors in a global IP network
- Authors: Irwin, Barry Vivian William
- Date: 2011
- Subjects: Sensor networks , Computer networks , TCP/IP (Computer network protocol) , Internet , Computer security , Computers -- Access control , Computer networks -- Security measures , Computer viruses , Malware (Computer software)
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: vital:4593 , http://hdl.handle.net/10962/d1004835
- Description: The use of Network Telescope systems has become increasingly popular amongst security researchers in recent years. This study provides a framework for the utilisation of this data. The research is based on a primary dataset of 40 million events spanning 50 months, collected using a small (/24) passive network telescope located in African IP space. This research presents a number of differing ways in which the data can be analysed, ranging from low-level protocol-based analysis to higher-level analysis at the geopolitical and network-topology level (an illustrative sketch follows this record). Anomalous traffic and illustrative anecdotes are explored in detail and highlighted. A discussion relating to the bogon traffic observed is also presented. Two novel visualisation tools are presented, which were developed to aid in the analysis of large network telescope datasets. The first is a three-dimensional visualisation tool which allows for live, near-real-time analysis, and the second is a two-dimensional fractal-based plotting scheme which allows for plots of the entire IPv4 address space to be produced and manipulated. Using the techniques and tools developed for the analysis of this dataset, a detailed analysis of traffic recorded as destined for port 445/tcp is presented. This includes the evaluation of traffic surrounding the outbreak of the Conficker worm in November 2008. A number of metrics relating to the description and quantification of network telescope configuration and the resultant traffic captures are described; it is hoped that their use will facilitate greater and easier collaboration among researchers utilising this network security technology. The research concludes with suggestions relating to other applications of the data and intelligence that can be extracted from network telescopes, and their use as part of an organisation's integrated network security systems.
- Full Text:
- Date Issued: 2011
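As an illustration of the low-level analysis described above, the sketch below tallies telescope events per destination port (for example 445/tcp, which dominated around the Conficker outbreak). It assumes events exported as hypothetical "timestamp,src_ip,dst_port,proto" CSV lines; the thesis's own storage format is not reproduced here.

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;

// Count telescope events per destination port from a simplified export.
public class PortTally {
    public static void main(String[] args) throws Exception {
        Map<Integer, Long> hits = new HashMap<>();
        for (String line : Files.readAllLines(Paths.get("telescope.csv"))) {
            String[] f = line.split(",");
            int dstPort = Integer.parseInt(f[2].trim());
            hits.merge(dstPort, 1L, Long::sum);   // increment per-port tally
        }
        System.out.println("445/tcp events: " + hits.getOrDefault(445, 0L));
    }
}
```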
A platform for computer-assisted multilingual literacy development
- Authors: Mudimba, Bwini Chizabubi
- Date: 2011
- Subjects: FundaWethu , Language acquisition -- Computer-assisted instruction , Language arts (Elementary) -- Computer-assisted instruction , Language and education , Education, Bilingual , Computer-assisted instruction , Educational technology , Computers and literacy
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4600 , http://hdl.handle.net/10962/d1004850 , FundaWethu , Language acquisition -- Computer-assisted instruction , Language arts (Elementary) -- Computer-assisted instruction , Language and education , Education, Bilingual , Computer-assisted instruction , Educational technology , Computers and literacy
- Description: FundaWethu is reading software designed to deliver reading lessons to Grade R-3 (foundation phase) children who are learning to read in a multilingual context. Starting from the premise that the system should be both educative and entertaining, the system allows literacy researchers or teachers to construct rich multimedia reading lessons, with text, pictures (possibly animated), and audio files (an illustrative sketch follows this record). Using the design-based research methodology, which is problem-driven and iterative, we followed a user-centred design process in creating FundaWethu. To promote sustainability of the software, we chose to bring teachers on board as “co-designers” using the lesson authoring tool. We made the authoring tool simple enough for use by non-computer-specialists, but expressive enough to enable a wide range of beginners' reading exercises to be constructed in a number of different languages (indigenous South African languages in particular). This project therefore centred on the use of design-based research to build FundaWethu: the design and construction of the system, and the usability study carried out to determine its adequacy.
- Full Text:
- Date Issued: 2011
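A lesson in such an authoring tool combines text, pictures and audio per page. The sketch below is one possible in-memory representation; the class and field names are illustrative and not taken from FundaWethu.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical representation of a multimedia reading lesson:
// each page pairs the taught text with optional picture and audio.
class LessonPage {
    String text;          // the sentence or word being taught
    String pictureFile;   // optional (possibly animated) image
    String audioFile;     // optional recorded pronunciation

    LessonPage(String text, String pictureFile, String audioFile) {
        this.text = text;
        this.pictureFile = pictureFile;
        this.audioFile = audioFile;
    }
}

class Lesson {
    final String language;   // e.g. an indigenous South African language
    final List<LessonPage> pages = new ArrayList<>();
    Lesson(String language) { this.language = language; }
}
```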
Bluetooth audio and video streaming on the J2ME platform
- Authors: Sahd, Curtis Lee
- Date: 2011 , 2010-09-09
- Subjects: Bluetooth technology , Mobile communication systems , Communication -- Technological innovations , Communication -- Network analysis , Wireless communication systems , L2TP (Computer network protocol) , Computer network protocols , Streaming audio , Streaming video
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4633 , http://hdl.handle.net/10962/d1006521 , Bluetooth technology , Mobile communication systems , Communication -- Technological innovations , Communication -- Network analysis , Wireless communication systems , L2TP (Computer network protocol) , Computer network protocols , Streaming audio , Streaming video
- Description: With the increase in bandwidth, the more widespread distribution of media, and the increased capability of mobile devices, multimedia streaming has become not only feasible, but more economical in terms of the space occupied by the media file and the costs involved in attaining it. Although much attention has been paid to peer-to-peer media streaming over the Internet using HTTP and RTSP, little research has focussed on the use of the Bluetooth protocol for streaming audio and video between mobile devices. This project investigates the feasibility of Bluetooth as a protocol for audio and video streaming between mobile phones using the J2ME platform, through the analysis of Bluetooth protocols, media formats, optimum packet sizes, and the effects of distance on transfer speed. A comparison was made between RFCOMM and L2CAP to determine which protocol could support the fastest transfer speed between two mobile devices. The L2CAP protocol proved the most suitable, providing average transfer rates of 136.17 KBps (an illustrative sketch follows this record). Using this protocol, a second experiment was undertaken to determine the most suitable media format for streaming in terms of file size, bandwidth usage, quality, and ease of implementation. Of the eight media formats investigated, the MP3 format provided the smallest file size, smallest bandwidth usage, best quality and highest ease of implementation. Another experiment was conducted to determine the optimum packet size for transfer between devices. A tradeoff was found between packet size and the quality of the sound file, with the highest transfer rates recorded at an MTU size of 668 bytes (136.58 KBps). The class of Bluetooth transmitter typically used in mobile devices (class 2) produces a weak signal and is adversely affected by distance. As such, the final investigation aimed to determine the effects of distance on audio streaming and playback. As expected, transfer speeds were higher when devices were situated close to each other than when they were far apart. Readings were taken at varying distances (1-15 metres), with erratic transfer speeds observed from 7 metres onwards. This research showed that audio streaming on the J2ME platform is feasible; however, using the currently available class of Bluetooth transmitter, video streaming is not. Video files were only playable once the entire media file had been transferred.
- Full Text:
- Date Issued: 2011
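A hedged sketch of MTU-sized streaming over L2CAP with the JSR-82 (javax.bluetooth) API, using the 668-byte MTU the study found optimal. The remote Bluetooth address and PSM in the connection URL are placeholders, and error handling is omitted for brevity.

```java
import java.io.InputStream;
import javax.bluetooth.L2CAPConnection;
import javax.microedition.io.Connector;

// Stream a media source over L2CAP in MTU-sized packets on J2ME.
public class L2capSender {
    public void stream(InputStream media) throws Exception {
        // Placeholder remote address (0123456789AB) and PSM (1001).
        L2CAPConnection conn = (L2CAPConnection) Connector.open(
            "btl2cap://0123456789AB:1001;TransmitMTU=668;ReceiveMTU=668");
        byte[] packet = new byte[conn.getTransmitMTU()];
        int read;
        while ((read = media.read(packet)) > 0) {
            if (read == packet.length) {
                conn.send(packet);               // full MTU-sized packet
            } else {
                byte[] last = new byte[read];    // final partial packet
                System.arraycopy(packet, 0, last, 0, read);
                conn.send(last);
            }
        }
        conn.close();
    }
}
```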
OVR : a novel architecture for voice-based applications
- Authors: Maema, Mathe
- Date: 2011 , 2011-04-01
- Subjects: Telephone systems -- Research , User interfaces (Computer systems) -- Research , Expert systems (Computer science) , Artificial intelligence , VoiceXML (Document markup language) , Application software -- Development
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4671 , http://hdl.handle.net/10962/d1006694 , Telephone systems -- Research , User interfaces (Computer systems) -- Research , Expert systems (Computer science) , Artificial intelligence , VoiceXML (Document markup language) , Application software -- Development
- Description: Despite the inherent limitation of accessing information serially, voice applications are growing in popularity as computing technologies advance. This is a positive development, because voice communication offers a number of benefits over other forms of communication. For example, voice may be better for delivering services to users whose eyes and hands are engaged in other activities (e.g. driving), or to semi-literate or illiterate users. This thesis proposes a knowledge-based architecture for building voice applications that helps reduce the limitations of serial access to information. The proposed architecture, called OVR (Ontologies, VoiceXML and Reasoners), uses a rich backend that represents knowledge via ontologies and utilises reasoning engines to reason with it, in order to generate intelligent behaviour. Ontologies were chosen over other knowledge representation formalisms because of their expressivity and executable format, and because current trends suggest a general shift towards the use of ontologies in many systems used for information storing and sharing. For the frontend, the architecture uses VoiceXML, the emerging de facto standard for voice-automated applications. A functional prototype was built for an initial validation of the architecture. The system is a simple voice application to help locate information about service providers that offer HIV (Human Immunodeficiency Virus) testing. We called this implementation HTLS (HIV Testing Locator System). The functional prototype was implemented using a number of technologies (an illustrative sketch follows this record). OWL API, a Java interface designed to facilitate manipulation of ontologies authored in OWL, was used to build a customised query interface for HTLS. The Pellet reasoner was used to support queries to the knowledge base, and Drools (the JBoss rule engine) was used for processing dialog rules. VXI was used as the VoiceXML browser, and an experimental softswitch called iLanga served as the bridge to the telephony system. (At the heart of iLanga is Asterisk, a well-known PBX-in-a-box.) HTLS behaved properly under system testing, providing the sought initial validation of OVR. , LaTeX with hyperref package
- Full Text:
- Date Issued: 2011
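The backend pattern described above, an ontology loaded through the OWL API and queried through Pellet, might look roughly as follows. The ontology file name and the #TestingSite class IRI are hypothetical stand-ins for the HTLS knowledge base.

```java
import java.io.File;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.*;
import org.semanticweb.owlapi.reasoner.OWLReasoner;
import com.clarkparsia.pellet.owlapiv3.PelletReasonerFactory;

// Load an ontology and ask Pellet for all individuals of a class.
public class HtlsQuery {
    public static void main(String[] args) throws Exception {
        OWLOntologyManager m = OWLManager.createOWLOntologyManager();
        OWLOntology ont = m.loadOntologyFromOntologyDocument(new File("htls.owl"));
        OWLReasoner reasoner = PelletReasonerFactory.getInstance().createReasoner(ont);

        // Hypothetical class of HIV testing sites in the knowledge base.
        OWLClass site = m.getOWLDataFactory().getOWLClass(
            IRI.create("http://example.org/htls#TestingSite"));
        for (OWLNamedIndividual i : reasoner.getInstances(site, false).getFlattened()) {
            System.out.println(i.getIRI());   // each inferred testing site
        }
    }
}
```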
A proxy approach to protocol interoperability within digital audio networks
- Authors: Igumbor, Osedum Peter
- Date: 2010
- Subjects: Digital communications , Local area networks (Computer networks) , Computer sound processing , Computer networks , Computer network protocols
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4601 , http://hdl.handle.net/10962/d1004852 , Digital communications , Local area networks (Computer networks) , Computer sound processing , Computer networks , Computer network protocols
- Description: Digital audio networks are becoming the preferred solution for the interconnection of professional audio devices. Prominent amongst their advantages are reduced noise interference, signal multiplexing, and a reduction in the number of cables connecting networked devices. In the context of professional audio, digital networks have been used to connect devices including mixers, effects units, preamplifiers, breakout boxes, computers, monitoring controllers, and synthesizers. Such networks are governed by protocols that define the connection management procedures and device synchronization processes of devices that conform to them. A wide range of digital audio network control protocols exist, each defining specific hardware requirements of conforming devices. Device parameter control is achieved by sending a protocol message that indicates the target parameter and the action that should be performed on it. Typically, a device conforms to only one protocol. By implication, only devices that conform to a specific protocol can communicate with each other, and only a controller that conforms to that protocol can control them. This results in the isolation of devices that conform to disparate protocols, which is currently a challenge in the professional music industry, particularly where digital networks are used for audio device control. This investigation seeks to resolve the issue of interoperability between professional audio devices that conform to different digital audio network protocols. This thesis proposes the use of a proxy that allows for the translation of protocol messages as a solution to the interoperability problem (an illustrative sketch follows this record). The proxy abstracts devices of one protocol in terms of another, hence allowing all the networked devices to appear as conforming to the same protocol. The proxy receives messages on behalf of the abstracted device, and then fulfills them in accordance with the protocol that the abstracted device conforms to. Any number of protocol devices can be abstracted within such a proxy. This has the added advantage of allowing a common controller to control devices that conform to the different protocols.
- Full Text:
- Date Issued: 2010
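The proxy idea lends itself to a small illustration: a controller-side protocol interface whose requests are fulfilled by translating them into a device's native protocol. Both message formats and the name mapping below are invented for illustration; the thesis targets real audio network control protocols.

```java
// The protocol the controller speaks: set a named parameter to a value.
interface DeviceProtocol {
    void setParameter(String target, int value);
}

// A device speaking its own native protocol.
class NativeDevice implements DeviceProtocol {
    public void setParameter(String target, int value) {
        System.out.println("native set " + target + " = " + value);
    }
}

// The proxy exposes the controller's protocol but fulfills each request
// by re-expressing it in the abstracted device's own protocol, so the
// device appears to conform to the controller's protocol.
class ProtocolProxy implements DeviceProtocol {
    private final NativeDevice device;
    ProtocolProxy(NativeDevice device) { this.device = device; }

    public void setParameter(String target, int value) {
        // Translate controller-side parameter names to device-side names.
        String nativeTarget = target.replace("fader/", "gain:");
        device.setParameter(nativeTarget, value);
    }
}
```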
Network management for community networks
- Authors: Wells, Daniel David
- Date: 2010 , 2010-03-26
- Subjects: Computer networks -- Management , Internet -- South Africa , Internet -- Management , Broadband communication systems
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4643 , http://hdl.handle.net/10962/d1006587
- Description: Community networks (in South Africa and Africa generally) are often serviced by limited-bandwidth network backhauls. Relative to the basic needs of the community, this is an expensive ongoing concern. In many cases the Internet connection is shared among multiple sites. Community networks may also lack the technical personnel to maintain a network of this nature. Hence, there is a demand for a system that will monitor and manage bandwidth use, as well as network use. The proposed solution for community networks, and the focus of this dissertation, is a system of two parts. A Community Access Point (CAP) is located at each site within the community network. It provides the hosts and servers at that site with access to services on the community network and the Internet; it is the site's router. The CAP provides a web-based interface (CAPgui) which allows configuration of the device and viewing of simple monitoring statistics. The Access Concentrator (AC) is the default router for the CAPs and the gateway to the Internet. It provides authenticated and encrypted communication between the network sites. The AC performs several monitoring functions, both for the individual sites and for the upstream Internet connection, and provides a means for centrally managing and effectively allocating Internet bandwidth through a web-based interface (ACgui). Bandwidth use can be allocated per user, per host and per site (an illustrative sketch follows this record). The system is maintainable, extendable and customisable for different network architectures. The system was deployed successfully to two community networks: the Centre of Excellence (CoE) testbed network, a peri-urban deployment, and the Siyakhula Living Lab (SLL) network, a rural deployment. The results gathered indicate that the project was successful, as the deployed system is more robust and more manageable than the previous systems.
- Full Text:
- Date Issued: 2010
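As a rough illustration of per-user, per-host and per-site allocation as performed by the AC, the sketch below keeps simple byte quotas. The names and units are illustrative only, not the dissertation's implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Toy ledger for bandwidth accounting: each subject (a user, host or
// site identifier) gets a byte quota, and traffic is recorded against it.
class BandwidthLedger {
    private final Map<String, Long> quotaBytes = new HashMap<>();
    private final Map<String, Long> usedBytes = new HashMap<>();

    void allocate(String subject, long bytes) {
        quotaBytes.put(subject, bytes);
    }

    // Record traffic and report whether the subject is still within quota.
    boolean record(String subject, long bytes) {
        long used = usedBytes.merge(subject, bytes, Long::sum);
        return used <= quotaBytes.getOrDefault(subject, Long.MAX_VALUE);
    }
}
```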
Visual based finger interactions for mobile phones
- Authors: Kerr, Simon
- Date: 2010 , 2010-03-15
- Subjects: User interfaces (Computer systems) , Mobile communication systems -- Design and construction , Cell phones -- Software , Mobile communication systems -- Technological innovations , Information display systems , Cell phones -- Technological innovations
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4652 , http://hdl.handle.net/10962/d1006621 , User interfaces (Computer systems) , Mobile communication systems -- Design and construction , Cell phones -- Software , Mobile communication systems -- Technological innovations , Information display systems , Cell phones -- Technological innovations
- Description: Vision-based technology such as motion detection has long been limited to the domain of powerful, processor-intensive systems such as desktop PCs and specialist hardware solutions. With the advent of much faster mobile phone processors and memory, a plethora of feature-rich software and hardware is being deployed onto the mobile platform, most notably onto high-powered devices called smart phones. Interaction interfaces such as touchscreens allow for improved usability but obscure the phone’s screen. Since the majority of smart phones are equipped with cameras, it has become feasible to combine their powerful processors, large memory capacity and the camera to support new ways of interacting with the phone which do not obscure the screen. However, it is not clear whether these processor-intensive visual interactions can in fact run at an acceptable speed on current mobile handsets, or whether they will offer the user a better experience than the number pad and direction keys present on the majority of mobile phones. A vision-based finger interaction technique is proposed which uses the back-of-device camera to track the user’s finger. This allows the user to interact with the mobile phone through mouse-based movements, gestures and steering-based interactions. A simple colour thresholding algorithm was implemented in Java, Python and C++ (an illustrative sketch follows this record). Various benchmarks and tests conducted on a Nokia N95 smart phone revealed that, on current hardware and with current programming environments, only native C++ yields results plausible for real-time interactions (a key requirement for vision-based interactions). It is also shown that different lighting levels and background environments affect the accuracy of the system, with background and finger contrast playing a large role. Finally, a user study was conducted to ascertain overall user satisfaction with keypad interactions versus the finger interaction technique, concluding that the new technique is well suited to steering-based interactions and, in time, mouse-style movements. Simple navigation remains better suited to the directional keypad.
- Full Text:
- Date Issued: 2010
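A simple colour thresholding tracker of the kind the abstract describes can be sketched as follows: mark pixels close to a calibrated finger colour and take their centroid as the finger position. The RGB distance measure and tolerance are illustrative; the thesis's implementation details may differ.

```java
// Colour-threshold finger tracking over a viewfinder frame.
public class FingerTracker {
    // pixels: packed 0xRRGGBB values; returns {x, y} centroid or null.
    public static int[] centroid(int[] pixels, int width, int target, int tolerance) {
        long sumX = 0, sumY = 0, count = 0;
        for (int i = 0; i < pixels.length; i++) {
            if (distance(pixels[i], target) <= tolerance) {
                sumX += i % width;   // column of matching pixel
                sumY += i / width;   // row of matching pixel
                count++;
            }
        }
        if (count == 0) return null;   // finger not visible this frame
        return new int[] { (int) (sumX / count), (int) (sumY / count) };
    }

    // Manhattan distance in RGB space: cheap enough for real-time use.
    private static int distance(int rgb, int target) {
        int dr = Math.abs(((rgb >> 16) & 0xFF) - ((target >> 16) & 0xFF));
        int dg = Math.abs(((rgb >> 8) & 0xFF) - ((target >> 8) & 0xFF));
        int db = Math.abs((rgb & 0xFF) - (target & 0xFF));
        return dr + dg + db;
    }
}
```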
A common analysis framework for simulated streaming-video networks
- Authors: Mulumba, Patrick
- Date: 2009
- Subjects: Computer networks -- Management , Streaming video , Mass media -- Technological innovations
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4590 , http://hdl.handle.net/10962/d1004828 , Computer networks -- Management , Streaming video , Mass media -- Technological innovations
- Description: Distributed media streaming has been driven by the combination of improved media compression techniques and an increase in the availability of bandwidth. This increase has led to the development of various streaming distribution engines (systems/services), which currently provide the majority of the streaming media available throughout the Internet. This study aimed to analyse a range of existing commercial and open-source streaming media distribution engines, and to classify them in such a way as to define a Common Analysis Framework for Simulated Streaming-Video Networks (CAFSS-Net). This common framework was used as the basis for a simulation tool intended to aid in the development and deployment of streaming media networks, and to predict the performance impacts of network configuration changes, video features (scene complexity, resolution) and general scaling. CAFSS-Net consists of six components: the server, the client(s), the network simulator, the video publishing tools, the videos and the evaluation tool-set. Test scenarios are presented consisting of different network configurations, scales and external traffic specifications. From these test scenarios, results were obtained to highlight interesting observations and to provide an overview of the different test specifications for this study. From these results, an analysis of the system was performed, yielding relationships between the videos, the different bandwidths, the different measurement tools and the different components of CAFSS-Net. Based on this analysis, the implications for CAFSS-Net are discussed, highlighting the achievements of, and the proposals for future work on, the different components. CAFSS-Net was able to successfully integrate all of its components to evaluate the different streaming scenarios. The streaming server, client and video components accomplished their objectives. Although the video publishing tool was able to provide the necessary compression/decompression services, the implementation of alternative compression/decompression schemes is proposed as a suitable extension. The network simulator and evaluation tool-set components were also successful, but future tests (particularly in low-bandwidth scenarios) are suggested in order to further improve the accuracy of the framework as a whole. CAFSS-Net is especially successful at analysing high-bandwidth connections, with results similar to those of the physical network tests.
- Full Text:
- Date Issued: 2009
A grid based approach for the control and recall of the properties of IEEE 1394 audio devices
- Authors: Foulkes, Philip James
- Date: 2009
- Subjects: IEEE 1394 (Standard) , Computer sound processing , Digital communications , Local area networks (Computer networks) , Sound -- Recording and reproducing -- Digital techniques , Computational grids (Computer systems)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4594 , http://hdl.handle.net/10962/d1004836 , IEEE 1394 (Standard) , Computer sound processing , Digital communications , Local area networks (Computer networks) , Sound -- Recording and reproducing -- Digital techniques , Computational grids (Computer systems)
- Description: The control of modern audio studios is complex. Audio mixing desks have grown to the point where they contain thousands of parameters. The control surfaces of these devices do not reflect the routing and signal processing capabilities the devices offer. Software audio mixing desk editors have been developed that allow for the remote control of these devices, but their graphical user interfaces retain the complexities of the audio mixing desks they represent. In this thesis, we propose a grid approach to audio mixing. The developed grid audio mixing desk editor represents an audio mixing desk as a series of graphical routing matrices (an illustrative sketch follows this record). These routing matrices expose the various signal processing points and signal flows that exist within an audio mixing desk. They allow audio signals to be routed within the device, and allow the device’s parameters to be adjusted by selecting the appropriate signal processing points. With the use of the programming interfaces defined as part of the Studio Connections – Total Recall SDK, the audio mixing desk editor was integrated with compatible DAW applications to provide persistence of audio mixing desk parameter states. Many audio studios currently use digital networks to connect audio devices together. Audio and control signals are patched between devices through software patchbays that run on computers. We propose a double grid-based FireWire patchbay aimed at simplifying the patching of signals between audio devices on a FireWire network. The FireWire patchbay was implemented in such a way that it can host software device editors that are Studio Connections compatible. This allows software device editors to be associated with the devices represented on the FireWire patchbay, thus allowing for studio-wide control from a single application. The double grid-based patchbay was also implemented such that it can be hosted by compatible DAW applications. Through this, the double grid-based patchbay application is able to provide the DAW application with the state of the parameters of the devices in a studio, as well as the connections between them. The DAW application may save this state data to its native song files, and pass it back to the double grid-based patchbay when the song file is reloaded at a later stage. The patchbay may then use this state data to restore the parameters of the patchbay and its device editors to a previous state, and this restored state may in turn be transferred to the hardware devices represented by the patchbay.
- Full Text:
- Date Issued: 2009
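The routing-matrix representation described above can be illustrated as a boolean grid of sources against destinations, toggled from the editor's interface. The device and signal names below are placeholders for illustration.

```java
// A routing matrix as a grid: cell (source, destination) is patched
// or not, mirroring the graphical grid the editor presents.
public class RoutingMatrix {
    private final boolean[][] patched;   // [source][destination]
    private final String[] sources, destinations;

    public RoutingMatrix(String[] sources, String[] destinations) {
        this.sources = sources;
        this.destinations = destinations;
        this.patched = new boolean[sources.length][destinations.length];
    }

    // Selecting a grid cell patches (or unpatches) a signal route.
    public void toggle(int src, int dst) {
        patched[src][dst] = !patched[src][dst];
    }

    // List the currently patched routes.
    public void print() {
        for (int s = 0; s < sources.length; s++)
            for (int d = 0; d < destinations.length; d++)
                if (patched[s][d])
                    System.out.println(sources[s] + " -> " + destinations[d]);
    }
}
```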