2024-03-29T06:00:44Z https://thescholarship.ecu.edu/oai/request
oai:TheScholarship.intra.ecu.edu:10342/1755 2021-03-03T20:52:51Z
Knight, Daniel P.
2013-06-06T12:18:25Z
2014-07-31T12:06:28Z
2013
http://hdl.handle.net/10342/1755
Many legacy enterprise software applications remain in active deployment despite being outdated. These large legacy applications are rapidly becoming less practical both for the organizations they serve and for the organizations responsible for maintaining them. Consequently, organizations running legacy enterprise software applications are looking for feasible methods of overhauling them. This thesis establishes a process model for refining the initial concept behind overhauling a legacy enterprise software application, and examines a case study of that process as applied to a real-world legacy software system.
Computer science
Computer engineering
Concept refinement
Overhaul
Process
Re-architect
Software development
Software engineering
Overhauling Legacy Enterprise Software Applications with a Concept Refinement Process Model
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/4483 2021-03-03T20:56:43Z
Barber, Scott M.
2014-08-06T20:21:41Z
2015-08-06T06:30:12Z
2014
http://hdl.handle.net/10342/4483
Sustainability has been defined by the Brundtland Commission as "meeting the needs of present generations without compromising the needs of future generations." Seeking to become more sustainable, the City of Greenville, North Carolina developed a Municipal Operations Sustainability Plan, which presents the vision for the city and specific goals for becoming environmentally, socially, and economically sustainable. Reducing greenhouse gas emissions, reducing municipal electricity use, reducing the use of potable water, increasing the number of street trees, and establishing a sustainability fund are among the goals that, when achieved, will help Greenville become more sustainable. The Public Works Department has been working with Schneider Electric to reduce energy use in municipal buildings. Schneider Electric specializes in energy management and is known worldwide for its success in saving energy and improving energy efficiency. Along with the work completed with Schneider Electric within the Public Works Department, Greenville has joined ICLEI - Local Governments for Sustainability. ICLEI is a nonprofit, international membership organization of cities, towns, and counties seeking to become more sustainable by addressing climate change and clean energy. ICLEI provides local governments with valuable resources, experience, and leadership to help save money, reduce energy use, and reduce greenhouse gas emissions. This research presents what Greenville has achieved thus far with Schneider Electric and ICLEI USA, states the next steps toward reaching its goals, and evaluates other North Carolina cities that are more sustainable in order to understand what can be achieved in Greenville.
Sustainability
Greenville (N.C.)
Public Works
Schneider Electric
ICLEI
North Carolina
City of Greenville, North Carolina - Sustainability
Undergraduate Thesis
oai:TheScholarship.intra.ecu.edu:10342/8591 2021-03-03T22:08:43Z
Matta, Rekesh
2020-06-24T01:27:34Z
2020-06-24T01:27:34Z
2020-06-22
http://hdl.handle.net/10342/8591
It is challenging to predict environmental behaviors because of extreme events such as heatwaves, typhoons, droughts, tsunamis, torrential downpours, wind ramps, and hurricanes. In this thesis, we propose a novel framework that improves environmental model accuracy through a new training approach. Extreme event detection algorithms are surveyed, selected, and applied within the proposed framework. The application of statistics to extreme event detection is quite diverse and leads to diverse formulations, each of which must be designed for a specific problem and tailored to the data available in a given situation. This diversity is one of the driving forces of this research toward identifying the most common mixture of components used in the analysis of extreme event detection. Besides the extreme event detection algorithm, we also integrate a sliding window approach to assess how well our models predict future events. To test the proposed framework, we collected coastal data from various sources; using our approach, we improved the predictive accuracy of several machine learning models by 20% to 25% in R² value. We also organize the discussion around different types of extreme event detection, present a few outlier definitions, and briefly introduce their techniques, summarizing the statistical methods involved in detecting environmental extremes such as wind ramps and climatic events.
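The sliding-window evaluation the abstract mentions can be sketched in a few lines. This is a minimal illustration with a toy series and a naive persistence forecast, not the thesis's actual models or data; the window sizes are arbitrary, and each window is scored by R² as in the abstract.

```python
# Sketch: evaluate a forecaster over sliding windows, scoring each by R^2.

def r2_score(y_true, y_pred):
    """Coefficient of determination R^2."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def sliding_windows(series, train_len, test_len, step):
    """Yield (train, test) slices that advance through the series."""
    i = 0
    while i + train_len + test_len <= len(series):
        yield (series[i:i + train_len],
               series[i + train_len:i + train_len + test_len])
        i += step

# Toy series and a naive "persistence" model: predict the last trained value.
series = [float(x) for x in range(40)]
scores = []
for train, test in sliding_windows(series, train_len=10, test_len=5, step=5):
    pred = [train[-1]] * len(test)  # persistence forecast
    scores.append(r2_score(test, pred))
```

Advancing the window rather than using a single train/test split is what lets the approach probe how well a model tracks regimes that shift over time, including windows containing extreme events.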
ENVIRONMENTAL MODEL ACCURACY IMPROVEMENT FRAMEWORK USING STATISTICAL TECHNIQUES AND A NOVEL TRAINING APPROACH
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/5876 2022-01-20T16:03:56Z
Kim, Sunghan, 1975-
Noor, Fouzia
Aboy, Mateo
McNames, James
2016-08-25T13:20:17Z
2016-08-25T13:20:17Z
2016-08-11
BioMedical Engineering OnLine. 2016 Aug 11;15(1):94
http://dx.doi.org/10.1186/s12938-016-0214-x
http://hdl.handle.net/10342/5876
10.1186/s12938-016-0214-x
Background: We describe the first automatic algorithm designed to estimate pulse pressure variation (PPV) from arterial blood pressure (ABP) signals under spontaneous breathing conditions. While a few publicly available algorithms can estimate PPV accurately and reliably in mechanically ventilated subjects, there is currently no automatic algorithm for estimating PPV in spontaneously breathing subjects. The algorithm utilizes our recently developed sequential Monte Carlo method (SMCM), called a maximum a-posteriori adaptive marginalized particle filter (MAM-PF). We report the performance assessment results of the proposed algorithm on real ABP signals from spontaneously breathing subjects.
Results: Our assessment results indicate good agreement between the automatically estimated PPV and the gold-standard PPV obtained with manual annotations. All of the automatically estimated PPV index measurements (PPVauto) were in agreement with the manual gold-standard measurements (PPVmanu) within ±4% accuracy.
Conclusion: The proposed automatic algorithm is able to give reliable estimations of PPV from ABP signals alone during spontaneous breathing.
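The PPV index that the particle filter estimates automatically has a standard beat-to-beat definition, which is what manual annotation computes: the spread of per-beat pulse pressures over a respiratory cycle, normalized by their mean. A minimal sketch of that definition (the per-beat values below are hypothetical, and the paper's MAM-PF machinery is not reproduced here):

```python
def pulse_pressure_variation(pulse_pressures):
    """PPV (%) from per-beat pulse pressures over one respiratory cycle:
    100 * (PPmax - PPmin) / ((PPmax + PPmin) / 2)."""
    pp_max, pp_min = max(pulse_pressures), min(pulse_pressures)
    return 100.0 * (pp_max - pp_min) / ((pp_max + pp_min) / 2.0)

# Hypothetical per-beat pulse pressures (mmHg) across one breath:
ppv = pulse_pressure_variation([40.0, 44.0, 48.0, 42.0])
```

The hard part the paper addresses is upstream of this formula: reliably extracting the per-beat pulse pressures and the respiratory modulation from a raw ABP signal when breathing is spontaneous and irregular.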
en_US
en
The Author(s)
Extended Kalman filter
A-posteriori distribution
Maximum a-posteriori estimation
Marginalized particle filter
Multi-harmonic signal
A novel particle filtering method for estimation of pulse pressure variation during spontaneous breathing
Article
oai:TheScholarship.intra.ecu.edu:10342/8803 2023-11-22T16:23:06Z
Childers, Logan
2020-12-18T15:48:08Z
2020-12-18T15:48:08Z
2020-11-16
http://hdl.handle.net/10342/8803
Gopala-Hemachandra codes are a variation of the Fibonacci universal code and have applications in data compression and cryptography. We study a specific parameterization of Gopala-Hemachandra codes and present several results pertaining to these codes. We show that GH_{a}(n) always exists for any n >= 1 when -4 <= a <= -2, meaning that these are universal codes. We develop two new algorithms that determine whether a GH code exists for a given a and n, and construct it if it exists. We also prove that when a = -(4+k), where k >= 1, there are at most k consecutive integers for which GH codes do not exist. In 2014, Nalli and Ozyilmaz proposed a stream cipher based on GH codes. We show that this cipher is insecure and provide experimental results on the performance of our program that cracks it.
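For orientation, here is a sketch of the classical Fibonacci universal code that GH codes generalize (this is the well-known Zeckendorf-based encoding, not the thesis's parameterized GH_{a} construction): greedily decompose n into non-consecutive Fibonacci numbers, emit digits least-significant first, and terminate with an extra '1' so every codeword ends in "11".

```python
def fibonacci_code(n):
    """Classical Fibonacci universal code for n >= 1: Zeckendorf digits,
    least-significant first, terminated by an extra '1'."""
    fibs = [1, 2]                     # Fibonacci numbers F(2), F(3), ...
    while fibs[-1] < n:
        fibs.append(fibs[-1] + fibs[-2])
    bits = []
    for f in reversed(fibs):          # greedy Zeckendorf decomposition
        if f <= n:
            bits.append('1')
            n -= f
        else:
            bits.append('0')
    # Strip leading zeros, put the low-order digit first, append terminator.
    return ''.join(bits).lstrip('0')[::-1] + '1'
```

Because Zeckendorf representations never use two consecutive Fibonacci numbers, "11" cannot occur inside a codeword, which is what makes the terminator unambiguous and the code self-delimiting; the GH_{a} variants studied in the thesis change the first two terms of the underlying sequence.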
en
Zeckendorf Representation
Gopala-Hemachandra Codes
Data Compression
Fibonacci Code
Stream Ciphers
Cryptanalysis
Studies on Gopala-Hemachandra Codes and their Applications
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/10546 2022-04-08T07:16:02Z
Das, Kanchan
2022-04-08T01:15:31Z
2022-04-08T01:15:31Z
2020-02-12
2579-9363
http://hdl.handle.net/10342/10546
10.31387/OSCM0400253
en_US
environmental and economic sustainability
closed-loop supply chain
component recovery
Planning Environmental and Economic Sustainability in Closed-Loop Supply Chains
Article
oai:TheScholarship.intra.ecu.edu:10342/12836 2023-06-05T13:47:50Z
McMorris, Jason
2023-06-05T13:47:50Z
2023-06-05T13:47:50Z
2023-05-03
http://hdl.handle.net/10342/12836
Having a reliable supply of fresh water is a problem that affects nations around the world. Saltwater desalination is one of the best methods for meeting this need, but it is an energy-intensive process that is expensive to maintain. Wave energy can be harnessed to increase the efficiency of seawater desalination by using a wave energy converter (WEC) to lower the external energy requirement. This thesis presents an analysis of scaled-down flap-type oscillating surge wave energy converter (OSWEC) geometries and their effects on power output. The performance of the OSWEC was tested using different flap shapes in addition to different configurations of thickness, density, and center of mass. The tested wave conditions were based on scaled-down wave conditions at Jennette's Pier in Nags Head, North Carolina, with a significant wave height of 0.117 m and a natural period of 1.68 s. The system's power take-off (PTO) was also manipulated using different damping and stiffness coefficients to maximize the power generated by the OSWEC. The wave simulations showed that the thinnest configuration of the variable-thickness cylindrical flap shape, with the highest tested density and center of mass, produced the most power under the given wave conditions, with an average power output of 30.11 W.
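The abstract's power figures come from its wave simulations; as background, for a purely linear PTO damper and sinusoidal flap pitch the time-averaged absorbed power has a standard closed form, sketched here with hypothetical damping and amplitude values (only the 1.68 s period is taken from the abstract, and the thesis's simulated PTO is not reduced to this formula):

```python
import math

def mean_pto_power(damping, amplitude, period):
    """Time-averaged power absorbed by a linear PTO damper B for sinusoidal
    flap pitch theta(t) = amplitude * sin(omega * t):
    P_mean = 0.5 * B * (omega * amplitude)^2."""
    omega = 2.0 * math.pi / period
    return 0.5 * damping * (omega * amplitude) ** 2

# Hypothetical damping (N*m*s/rad) and pitch amplitude (rad) at the
# abstract's 1.68 s wave period:
p_avg = mean_pto_power(damping=3.0, amplitude=0.4, period=1.68)
```

The quadratic dependence on both angular frequency and pitch amplitude is why flap geometry, density, and center of mass, which set the motion response, dominate the power output in the study.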
en
Renewable energy
Wave energy converter
oscillating surge wave energy converter
geometry optimization
OSCILLATING SURGE WAVE ENERGY CONVERTER GEOMETRY OPTIMIZATION FOR DIRECT SEAWATER DESALINATION
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/10553 2022-04-08T07:16:02Z
Das, Kanchan
2022-04-08T01:50:35Z
2022-04-08T01:50:35Z
2019-04-01
2455-7749
http://hdl.handle.net/10342/10553
10.33889/ijmems.2019.4.2-022
en_US
Sustainable food supply chain
Food collection cooperatives
Resilience criteria
Integrating Lean, Green, and Resilience Criteria in a Sustainable Food Supply Chain Planning Model
Article
oai:TheScholarship.intra.ecu.edu:10342/3631 2021-03-03T20:55:22Z
Bazargani, Sahar
2011-08-22T15:13:04Z
2013-08-31T12:06:13Z
2011
http://hdl.handle.net/10342/3631
Overwhelming users with large amounts of information on the Web has resulted in users' inability to find information and in dissatisfaction with available information searching and filtering systems. Moreover, the information is distributed over many websites, and a large part of it (for example, news) is updated frequently. Keeping track of changes in this huge amount of information is a real problem for users.

Because of the great impact information has on people's lives and on business decision-making, much research has been done on efficient ways of accessing and analyzing it. This thesis proposes a conceptual classification and ranking method that provides better user access to a wider range of information; it also provides information that may help in analyzing global trends in various fields. To demonstrate the effectiveness of this method, a feed aggregator system was developed and evaluated for this thesis.

To improve the flexibility and adaptability of the system, we adopted an agent-oriented software engineering architecture, which also helped facilitate the development process. In addition, since the system stores and processes large amounts of information, which requires a large number of resources, a cloud platform service was used for deploying the application. The result was a cloud-based software service that benefited from unlimited on-demand resources.

To take advantage of the available features of public cloud computing platforms that support the agent-oriented design, the multi-agent system was implemented by mapping the agents to cloud computing services. In addition, the cloud queue service provided by some cloud providers, such as Microsoft and Amazon, was used to implement indirect communication among the agents in the multi-agent system.
Computer science
Agent-based software engineering
Cloud computing
Conceptual classification
Improving Access to Information through Conceptual Classification
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/12464 2023-04-12T14:55:28Z
Wang, Ping
et al
2023-04-12T14:55:28Z
2023-04-12T14:55:28Z
2022-05-05
1932-6203
http://hdl.handle.net/10342/12464
10.1371/journal.pone.0267966
en_US
Warfarin
risk factor
critically ill patients
Warfarin Sensitivity is Associated with Increased Hospital Mortality in Critically Ill Patients
Article
oai:TheScholarship.intra.ecu.edu:10342/7060 2022-12-09T16:13:41Z
Smith, Davis B
2019-02-14T18:15:37Z
2020-12-01T09:01:54Z
2018-12-10
http://hdl.handle.net/10342/7060
The public perception of higher education, the culture of that institution, and its value to American citizens is changing. Taxpayer demands to downsize costly government expenditures, including government subsidizing of state-supported educational institutions, have resulted in increased scrutiny of colleges and universities. Special programs have been reduced, and in the case of post-secondary schools there is increased pressure to find alternative funding sources and to increase tuition. As a result, educational stakeholders have been forced to examine all aspects of institutional performance, especially numbers of graduating students. Though numerous theories suggest innovative ways to increase student success, college presidents face the reality of limited money to implement every success effort. More informed spending decisions might be possible by exploring an economic production function model to see what expenditures might produce better student success results at post-secondary institutions. This study examined four expenditure inputs - instructional support, academic support, institutional support, and student services support - to determine whether there were any relationships between expenditure categories and graduation rates. My population included the 58 North Carolina Community College System (NCCCS) schools between 2004 and 2014, and I used ordinary least squares regression to test my research question. The data for this study were collected from the Integrated Postsecondary Education Data System (IPEDS). The results revealed no statistically significant relationship between any individual expenditure category and the graduation rates of those institutions for that time period.
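The regression machinery behind the study's research question is ordinary least squares; a minimal one-predictor sketch using the closed-form estimates is below. The numbers are hypothetical illustrations (spending per student in thousands of dollars against a graduation rate), not IPEDS data, and significance testing, which the study also performs, is omitted.

```python
def ols_simple(x, y):
    """Ordinary least squares for one predictor: returns (slope, intercept)
    from the closed-form normal equations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical data: instructional spending per student ($1000s) vs. graduation rate (%)
spend = [4.1, 4.8, 5.2, 5.9, 6.4]
grad = [18.0, 19.5, 19.1, 20.2, 21.0]
slope, intercept = ols_simple(spend, grad)
```

The study's finding corresponds to the case where such slopes are not statistically distinguishable from zero at the 0.05 level.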
en
Expenditures
The Relationship Between Categorical Expenditures and Graduation Rates at North Carolina Community Colleges
Doctoral Dissertation
oai:TheScholarship.intra.ecu.edu:10342/3630 2021-03-03T20:56:04Z
Asghary Karahroudy, Ali
2011-08-22T15:13:03Z
2012-03-28T15:47:07Z
2011
http://hdl.handle.net/10342/3630
Cloud computing offers massive scalability, immediate availability, and low-cost services as major benefits, but as with most new technologies, it also introduces new risks and vulnerabilities. Although different cloud structures and services are expanding, cloud computing penetration has not been as great as envisioned. Specific concerns have stopped enterprises from fully joining the cloud. One of the major disadvantages of cloud computing is its increased security risk. In this study I conduct an in-depth analysis of the different aspects of security issues in cloud computing and propose a file distribution model as a possible solution for alleviating those security risks. The study also shows the effectiveness of the new security model compared with those currently in use. I present a new file storage system with variable-size chunks, distributed chunk addressing, decentralized file allocation tables, a spread deciphering key, randomly selected file servers, and a fault-tolerant chunk system.
Computer science
Computer engineering
Availability
Confidentiality
Distributed file system
Integrity
Security
Security Analysis and Framework of Cloud Computing with Parity-Based Partially Distributed File System
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/7446 2022-01-20T16:04:18Z
Williams, Patrick M
2019-08-21T18:51:09Z
2019-08-21T18:51:09Z
2019-07-02
http://hdl.handle.net/10342/7446
Age-associated cognitive decline (AACD) is a natural part of life. The difference between malignant and benign AACD can be difficult to determine in the early stages of dementia. Many factors affect an individual's brain changes throughout their life; therefore, the detection of dementia commonly requires longitudinal studies. By the time the symptoms of dementia manifest, the damage to one's central nervous system is irreversible. The investigation of biomarkers for the early detection of dementia is ongoing. Electroencephalogram (EEG) research, along with other neuroimaging and clinical testing, has shown that it is possible to detect subtle changes to the central nervous system before the onset of behavioral changes due to dementia. In this research, a sequential imaging oddball paradigm utilizing upright and inverted familiar and unfamiliar faces was used to scrutinize the effect of facial inversion throughout healthy adult aging. The results indicate that late event-related potentials such as the P300 and the late positive potential may be biomarkers for tracking age-related changes. Additionally, it may be concluded that the oddball paradigm is not the optimal way to elicit the face inversion effect. Further research is recommended to develop conclusions that could not be reached here due to the limited population and sample size.
en
Mild Cognitive Impairment
Alzheimer's Disease
Processing
Spectral
Analysis
Biomarker
Oddball
P300
N170
N400
P600
Late Positive Potential
Fieldtrip
MATLAB
The facial inversion effect throughout healthy adult aging : an event-related brain potential study
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/11067 2022-09-10T07:16:03Z
Sylcott, Brian
Lin, Chia-Cheng
Williams, Keith
Hinderaker, Mark
2022-09-09T12:46:47Z
2022-09-09T12:46:47Z
2021
2369-2529
http://hdl.handle.net/10342/11067
10.2196/24950
en_US
postural sway
virtual reality
force plate
Investigating the Use of Virtual Reality Headsets for Postural Control Assessment: Instrument Validation Study
Article
oai:TheScholarship.intra.ecu.edu:10342/3764 2021-03-03T20:54:35Z
Zadeh Mohammadi, Mehdi
2012-01-18T20:16:34Z
2014-01-31T13:06:19Z
2011
http://hdl.handle.net/10342/3764
An effective process equipment monitoring tool widely accepted in manufacturing units today is overall equipment effectiveness (OEE). OEE made its debut as a pillar of the total productive maintenance (TPM) system, whose goals are to increase the reliability and availability of equipment so that resource waste is reduced and product quality is enhanced. Interest by a manufacturing company in North Carolina in evaluating the appropriateness of OEE in its application, along with a desire to explore other quality performance metrics that can be easily tracked to predict OEE, motivated this study. The goals of this study were:
1) to recommend to the manufacturing company definite steps to implement a robust OEE-based equipment performance evaluation system;
2) to demonstrate on a pilot basis how the implementation should be carried out; and
3) to study whether process capability, which can be used as a leading quality indicator, has any correlation to OEE, which is a lagging indicator.
A framework was established for the implementation of OEE in a pilot area of the manufacturing unit. A systematic plan was proposed and implemented, demonstrating that it is possible to reverse the effects of an ineffective OEE measurement process and create an effective system for pursuing continuous improvement. Success in this endeavor can be attributed to training at various levels. Another key factor in establishing the system was using a calculation method for OEE suited to the workforce's level of understanding. Providing clear, easy-to-interpret definitions for all terms involved in the OEE calculation also played a key role in the success of the implementation. Recommendations on how to change the company's culture to embrace the concept of OEE were provided and pursued. Use of OEE values in annual personnel evaluations was stopped.
To explore the correlation between process capability and OEE, the null hypothesis chosen was that there is no relation between the process capability index and OEE, or between the process capability index and each of OEE's three elements: availability, performance, and quality. Calculating p-values for hypothesis testing using non-linear regression analysis, it was found that at a significance level of 0.05 the null hypothesis cannot be rejected for any of the four sub-hypotheses. Limitations of the study included its short time period and a lack of good available data. Another limitation was that the final decision on whether a part was good or bad was made by attempting to assemble the part in the final assembly operation. Future work would be to explore the correlation between process capability and OEE in a controlled lab environment with more machines and parts and definite part specification limits.
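The OEE figure at the center of the study is, by its standard definition, just the product of the three element ratios named in the abstract. A minimal sketch with illustrative numbers (the component formulas in the comments follow the common TPM convention, which the company's own calculation method may have adapted):

```python
def oee(availability, performance, quality):
    """Overall Equipment Effectiveness as the product of its three elements."""
    return availability * performance * quality

# Each element is itself a ratio, conventionally:
#   availability = run_time / planned_production_time
#   performance  = (ideal_cycle_time * total_count) / run_time
#   quality      = good_count / total_count
score = oee(0.90, 0.95, 0.99)
```

Because the three elements multiply, a modest loss in any one of them drags the overall score down, which is why clear, consistently applied definitions of each term mattered so much to the implementation.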
Industrial engineering
Correlation analysis
Effectiveness
Equipment
Process capability
Implementation of a System for Monitoring Overall Equipment Effectiveness (OEE) and Exploring Correlation Between OEE and Process Capability
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/4966 2021-03-03T20:56:37Z
Writtenberry, Robert William
2015-06-04T19:57:46Z
2015-06-04T19:57:46Z
2015
http://hdl.handle.net/10342/4966
The application programming interfaces supplied by Apple for developing applications in the Swift programming language on iOS devices provide limited support for declaring gesture recognizers beyond the simple ones currently provided. GestDefLS allows the developer to define custom single- or multi-touch gesture recognizers. The language has the benefit of providing a concise, easy-to-understand syntax for declaring gestures, in addition to being readily compatible with Apple's Swift programming language. Furthermore, the language provides a higher level of modularity by separating gesture recognition code from code pertaining to what the application should actually accomplish.
Computer science
DSL
Gestures
IOS
GestDefLS : A Gesture Definition Language in Swift
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/6944 2021-03-03T21:18:49Z
Agarwal, Ritesh
2018-08-14T14:20:12Z
2018-08-14T14:20:12Z
2018-07-17
http://hdl.handle.net/10342/6944
Mixing efficiency is an important issue in the design of micromixers, since effective mixing between a deoxyribonucleic acid (DNA) sample and a restriction enzyme is required for a fast digestion process. Mixing is improved by chaotic advection through serpentine mixing channels, which reduces the fluid diffusion path while increasing the fluid contact area. The purpose of this research is to evaluate mixing efficiency in microchannel mixers through a numerical study of different micromixer configurations. To accomplish this, a computational fluid dynamics (CFD) study is conducted using ANSYS Fluent and CFX software for the different geometries designed. Several geometric configurations were proposed and used: a bottleneck near the inlet and along the zigzag, and a curved rectangular zigzag geometry. Mixing is analyzed under different conditions, such as Reynolds number, the effect of geometry on fluid flow, and different diffusion coefficients, by evaluating the mixing index of the fluid. Results show a better and faster mixing index around the bottleneck region compared to the other configurations. This geometry can be used to model passive micromixers and other microfluidic devices with shorter mixing lengths and faster mixing between reagents.
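A mixing index like the one evaluated above is commonly defined from the spread of concentration values sampled across a channel cross-section: 1 minus the ratio of the observed standard deviation to that of the fully unmixed state. This sketch uses that common definition with made-up sample values; the thesis's exact formulation may differ.

```python
def mixing_index(concentrations, c_mean=0.5):
    """Mixing index 1 - sigma/sigma_max: sigma is the standard deviation of
    normalized concentrations sampled across the outlet cross-section;
    sigma_max is the value for the fully unmixed (0/1 streams) state."""
    n = len(concentrations)
    var = sum((c - c_mean) ** 2 for c in concentrations) / n
    sigma_max = (c_mean * (1.0 - c_mean)) ** 0.5  # 0.5 for equal inlet streams
    return 1.0 - var ** 0.5 / sigma_max

perfectly_mixed = mixing_index([0.5] * 8)              # -> 1.0
unmixed = mixing_index([0.0] * 4 + [1.0] * 4)          # -> 0.0
```

An index of 1 means the two reagent streams are fully homogenized at the sampled cross-section, so comparing the index at successive cross-sections gives the "faster mixing around the bottleneck" result a quantitative footing.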
en
Numerical Analysis
passive micromixer
Passive micromixers for DNA analysis using CFD modelling
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/1756 2021-03-03T20:52:45Z
Moster, Emil
2013-06-06T12:18:26Z
2013-06-06T12:18:26Z
2013
http://hdl.handle.net/10342/1756
System Development Life Cycles (SDLCs) for organizations are often based upon traditional software development models such as the waterfall model. These processes are complex, heavy in documentation deliverables, and more rigid and less flexible than other methods being used in modern software development. Consider, by contrast, agile methods for software development. In essence, agile methods recommend lightweight documentation and simplified process. The focus shifts to completed software as the "measure of success" for delivery of product in software projects, versus accurate and comprehensive documentation and the accomplishment of static milestones in a work breakdown structure. This thesis implements, explores, and recommends a hybrid agile approach to Scrum in order to satisfy the rigid, document-laden deliverables of a waterfall-based SDLC process. This hybrid Scrum balances having enough documentation and process - but not too much - to meet SDLC deliverables, while at the same time focusing on the timely product delivery and customer interaction that come from an agile approach to software development.
Information technology
Computer science
Agile
Development
Method
Process
Waterfall
USING HYBRID SCRUM TO MEET WATERFALL PROCESS DELIVERABLES
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/8608 2021-03-03T22:08:35Z
Ludwich, Jacob M
2020-06-24T12:46:42Z
2020-06-30T08:01:54Z
2020-06-22
http://hdl.handle.net/10342/8608
Fabrication and Characterization of a Hydrogel-Nanofiber Composite for Cartilage Replacement
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/7284 2021-03-03T21:21:49Z
Thomas, Sam
2019-06-12T20:06:51Z
2019-06-12T20:06:51Z
2019-05-02
http://hdl.handle.net/10342/7284
Adversarial machine learning has been an important area of study for the securing of machine learning systems. However, for every defense that is made to protect these artificial learners, a more sophisticated attack emerges to defeat it. This has created an arms race, with the problem of adversarial attacks never being fully mitigated. This thesis examines the field of adversarial machine learning; specifically, the property of transferability, and the use of dynamic defenses as a solution to attacks which leverage it. We show that this is an emerging field of research, which may be the solution to one of the most intractable problems in adversarial machine learning. We go on to implement a minimal experiment, demonstrating that research within this area is easily accessible. Finally, we address some of the hurdles to overcome in order to unify the disparate aspects of current related research.
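The transferability property the thesis studies can be illustrated with toy linear models: a perturbation crafted against an attacker's surrogate classifier also fools a separately trained target. Everything below is a hypothetical minimal example (linear scores, a fast-gradient-sign step), not the thesis's experiment.

```python
# Toy transferability demo with two hand-picked linear classifiers.

def predict(w, b, x):
    """Binary linear classifier: 1 if w.x + b > 0 else 0."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def fgsm(w, x, eps):
    """Fast-gradient-sign step against a linear score: the gradient w.r.t.
    x is just w, so shift each feature by eps in the sign direction that
    lowers the score."""
    return [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

surrogate = ([1.0, 1.0], -1.0)   # attacker's local model
target = ([0.9, 1.1], -1.0)      # victim's model, trained separately

x = [0.8, 0.8]                            # classified 1 by both models
x_adv = fgsm(surrogate[0], x, eps=0.5)    # crafted using the surrogate only
```

Here `x_adv` flips the surrogate's prediction and, without any access to the target's weights, flips the target's prediction as well; dynamic defenses aim to break exactly this kind of cross-model agreement.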
en
adversarial machine learning
transferability
DYNAMIC DEFENSES AND THE TRANSFERABILITY OF ADVERSARIAL EXAMPLES
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/6910 2021-03-03T21:18:41Z
Bly, Francis E
2018-08-14T13:17:02Z
2018-08-14T13:17:02Z
2018-05-18
http://hdl.handle.net/10342/6910
Commercial fishing is indisputably one of the most dangerous occupations in the United States and the world. Due to hazardous work conditions such as bad weather, the commercial fishing industry is plagued with high numbers of fatal and non-fatal injuries. Safety training is mandatory, but resources are minimal to nonexistent in many areas of the United States. The aim of this research is to understand the risk perception of commercial fishermen. Understanding how commercial fishermen perceive risk can help tailor safety training opportunities to make use of already limited resources. Increased awareness of hazards in other industries has been shown to lower injuries from those specific hazards. Therefore, an increased knowledge of fishermen's perceptions of risk may provide valuable information to industry trainers for providing enhanced educational opportunities and training programs. Qualitative methods in the form of semi-structured interviews were conducted with fourteen commercial fishermen from the Gulf Coast of Florida and the Inner and Outer Banks of North Carolina. A quantitative survey covering demographics and fishing experience was included with the interviews. Interviews were transcribed and reviewed using thematic analysis to establish recurring themes. The research concluded that commercial fishermen have heightened perceptions of risk. The fishermen interviewed were all aware of the high risks associated with their profession. Unfortunately, common themes such as inexperienced workers, quotas/regulations, and drug use only increase these high risks. Future training initiatives need to focus on creating fishery-specific courses in order to make training as relevant to the work environment as possible. Continued research and safety interventions are needed to help lower fatality and injury rates in the commercial fishing industry.
en
Commercial fishing
Understanding the Risk Perception of Commercial Fishermen
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/3888 2022-12-09T16:33:16Z
Fu, Tian
2012-05-20T15:25:14Z
2012-05-20T15:25:14Z
2012
http://hdl.handle.net/10342/3888
Wireless local area networks (WLAN) are one of the most widely used technologies in our daily lives. Instead of being limited to the range of wired equipment, users can communicate freely. However, since wireless networks are based on communication within radio channels, WLANs are susceptible to malicious attempts to block the channel. One of the most frequently used attacks is a Denial of Service (DoS) attack known as a jamming attack. Jamming attacks interfere with the transmission channels by constantly sending useless packets in order to disturb the communication between legitimate nodes. In real wireless networks where users communicate constantly, a jamming attack can cause serious problems, so a study of jamming attacks and how to prevent them is necessary. In this thesis, jamming attacks were simulated in WLAN using OPNET Modeler in order to provide a better understanding of them. This study will be helpful for future research and development of a practical, effective way to avoid jamming attacks. The objectives of this thesis were to simulate client-server and ad-hoc networks and different jammers; to launch jamming attacks in order to test how much influence different jammers have on WLAN communications; and to compare the performance of different ad-hoc routing protocols.
Information technology
Information science
Ad-hoc routing protocol
Jammer
Jamming attack
OPNET
Simulation
MODELLING [sic] AND SIMULATION OF JAMMING ATTACKS IN WLAN
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/85692021-03-03T22:08:32Zcom_10342_41com_10342_1com_10342_122col_10342_42col_10342_124
Chegireddy, Yashwanth Reddy
2020-06-24T01:05:45Z
2020-06-24T01:05:45Z
2020-06-22
http://hdl.handle.net/10342/8569
Online course delivery for Distance Education (DE) programs is evolving day by day. Though DE programs cost less and provide flexibility to students depending on their pace of learning, delivering lab experiments to them is always a challenging task for instructors. A few hands-on experiments require students to visit campus in order to perform them, but for amateurs who have never had training experience, frequent on-campus visits can be costly. We propose Model-View-Controller-Education-3D (MVC-E3D), a framework for developing virtual reality applications. This framework is similar to the MVC architecture, but an additional component, the Education (E) module, which contains instructional content such as a lab manual, demo videos, a quiz section, and a User Feedback System (UFS) required for training purposes, is added to the Model (M) component. Besides this, the framework illustrates how to broadcast course contents to students' machines remotely using the Photon Unity Networking (PUN) plugin. To show how the proposed framework works, we have implemented a pre-hands-on experiment for a ladder safety virtual reality training class, where an instructor hosts a 3D-environment-based construction lab experiment on a server, allowing Distance Education (DE) students to access and finish the training. Hosting a server can reduce computational requirements for student computers. Each system component can be reused for other hands-on experiments. Students can gain experience with lab equipment and familiarize themselves with the preliminary steps before visiting on-campus lab training.
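The abstract's component layout can be sketched schematically. The actual framework targets Unity/C# with the PUN plugin; the class and attribute names below are a hypothetical, language-agnostic illustration of how the Education module attaches to the Model in an MVC arrangement, not the thesis's implementation.

```python
class EducationModule:
    """The added 'E' component: instructional content attached to the Model."""
    def __init__(self):
        self.lab_manual = []
        self.demo_videos = []
        self.quiz = []
        self.feedback = []  # User Feedback System (UFS)

class Model:
    """Holds application state plus the Education module."""
    def __init__(self):
        self.state = {}
        self.education = EducationModule()

class View:
    """Renders model state; in the real framework this is the 3-D scene."""
    def render(self, model):
        return f"state={model.state} quiz_items={len(model.education.quiz)}"

class Controller:
    """Routes user input to model updates, then refreshes the view."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def handle(self, key, value):
        self.model.state[key] = value
        return self.view.render(self.model)

controller = Controller(Model(), View())
output = controller.handle("ladder_rung", 3)
```

The point of the sketch is only the dependency direction: the Education module lives inside the Model, so views and controllers reach instructional content the same way they reach any other model state.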
MVC EDUCATION 3D : A FRAMEWORK FOR DISTANCE EDUCATION VIRTUAL REALITY APPLICATIONS
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/88042021-08-31T13:52:09Zcom_10342_122com_10342_41com_10342_1col_10342_124col_10342_45
Isenhour, Leslie
2020-12-18T15:48:12Z
2020-12-18T15:48:12Z
2020-12-03
http://hdl.handle.net/10342/8804
What does it take to be a new leader in technology? The idea that it takes one specific attribute over another begs the question: which attributes are most important? The research within this thesis will argue that some attributes stand out among all leaders. This report will also explore a current and cultural perspective of female leadership in technology by spotlighting the successes of a select group of women. These successes will provide insight into additional traits that female technology leaders must exhibit to achieve the same level of success as their male counterparts. Finally, the report will summarize the quality attributes that will prepare tomorrow's female workforce to even the playing field in technological leadership roles.
en
Research of Leadership Qualities Exhibited by Female Leaders
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/69082021-03-03T21:18:40Zcom_10342_41com_10342_1com_10342_30com_10342_122col_10342_44col_10342_35col_10342_124
Cruz-Molina, Genesis R.
2018-08-14T13:00:24Z
2020-08-01T08:01:52Z
2018-05-18
http://hdl.handle.net/10342/6908
Chronic pain affects approximately 100 million Americans annually. Heart rate variability and skin conductance have been used separately as measures of pain intensity. Current methods of assessing pain intensity have limitations: they rely entirely on subjective pain scales, require the patient's cooperation, and fail completely in unconscious patients. Therefore, there is a need for an objective method of measuring pain to improve the quality of pain management. Understanding the relationship between heart rate variability and skin conductance can be beneficial for non-pharmacological treatments of pain such as biofeedback training, as combining both signals can create a more powerful tool for measuring pain. To identify a relationship between skin conductance and heart rate variability, we propose a cross-correlation analysis. Such an approach necessitates collection of baseline data on healthy college students, administration of a thermal stimulus, and collection of data during and after the stimulus.
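The cross-correlation analysis the abstract proposes can be illustrated with synthetic signals. The data below are fabricated for the sketch (the study's real signals are HRV and skin conductance recordings); the peak of the normalized cross-correlation recovers the lag between two related signals.

```python
import numpy as np

def normalized_cross_correlation(x, y):
    """Normalized cross-correlation of two equal-length signals at every lag."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    x = (x - x.mean()) / (x.std() * len(x))  # scale so full overlap peaks near 1
    y = (y - y.mean()) / y.std()
    return np.correlate(x, y, mode="full")

# Hypothetical signals: the second is the first offset by 2 samples.
t = np.arange(100)
hrv = np.sin(t / 5.0)
sc = np.roll(hrv, 2)

corr = normalized_cross_correlation(hrv, sc)
lag = int(np.argmax(corr)) - (len(hrv) - 1)  # lag at which correlation peaks
```

With real recordings, the lag and height of the peak indicate how strongly and with what delay skin conductance tracks heart rate variability.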
en
heart rate variability
continuous decomposition analysis
cross-correlation
Identifying a cross-correlation between heart rate variability and skin conductance using pain intensity on healthy college students
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/74892021-03-03T21:23:32Zcom_10342_41com_10342_1com_10342_122col_10342_42col_10342_124
Patel, Himaniben P
2019-08-22T13:02:30Z
2020-05-01T08:01:53Z
2019-07-22
http://hdl.handle.net/10342/7489
The world, as we know it, is constructed in the form of knowledge. Our ancestors have passed their experiences to the next generation over time using handwritten documents. Although these old manuscripts are still available, to disseminate that information to everyone they must be converted into digital form. In the 21st century, computers are becoming faster than ever before, thanks to advances in the fields of machine learning, deep learning, big data, and cognitive computing. Relationships within data may be found which, in turn, may solve many problems. Cognitive computing can be used to process vast amounts of data and discover hidden patterns or insights. Although research has explored many diverse, specific fields of application for cognitive computing, a comprehensive overview of the concept and its use is severely lacking. By leveraging the abilities of cognitive computing, text may be extracted from handwritten documents in the form of images. The first part of the thesis focuses on a literature review of research papers related to applications of cognitive computing, collected from the IEEE, ACM, and Springer databases. Currently, two companies provide cognitive computing services related to handwritten text recognition: Microsoft Azure's Computer Vision and Google Cloud's Vision AI. The second part focuses on conducting a performance analysis between these services based on pre-defined criteria, where Microsoft Azure's Computer Vision service performed better overall for cursive English. Transkribus is a platform for automated recognition and transcription of archival documents, which uses a deep learning model to recognize text from an image. The third part focuses on analyzing the effectiveness of Microsoft Azure's Computer Vision service by conducting a performance analysis against Transkribus, where images (collected from the Library of Congress with their transcribed text) were submitted. The results showed that Microsoft Azure's Computer Vision service performed better compared to Transkribus. The last part focuses on increasing the accuracy of Microsoft Azure's Computer Vision service by improving the quality of the images. Various image pre-processing techniques were analyzed and applied to the dataset. Both improved and unimproved images were given as input to Microsoft Azure's Computer Vision service, and the results were evaluated, showing that Microsoft Azure's Computer Vision accuracy could increase for some images when image quality was improved.
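Comparing recognition services against a reference transcription is commonly done with character error rate (edit distance over reference length); the abstract does not state its exact scoring criteria, so the metric below is an illustrative assumption, not the thesis's method.

```python
def char_error_rate(reference, hypothesis):
    """Character error rate: Levenshtein edit distance between the OCR output
    and the reference transcription, divided by the reference length."""
    m, n = len(reference), len(hypothesis)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # cost of deleting i reference characters
    for j in range(n + 1):
        d[0][j] = j  # cost of inserting j hypothesis characters
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return d[m][n] / m if m else 0.0

# Hypothetical example: one dropped letter and one substitution.
cer = char_error_rate("handwritten text", "handwriten test")
```

A lower rate means the service's output is closer to the ground-truth transcription, which makes scores directly comparable across services on the same image set.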
en
Archival Document Processing
Archival Document Processing using Cognitive Computing
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/107202022-06-23T07:16:06Zcom_10342_41com_10342_1col_10342_44
Das, Kanchan
Lashkari, R. S.
Khan, Azizur R.
2022-06-22T17:44:33Z
2022-06-22T17:44:33Z
2021
1979-3561
http://hdl.handle.net/10342/10720
10.31387/oscm0440285
en
humanitarian logistics
rescue and relief operations
logistics in traffic-congested areas
A Humanitarian Logistics-Based Planning for Rescue and Relief Operation After a Devastating Fire Accident
Article
oai:TheScholarship.intra.ecu.edu:10342/37882021-03-03T20:52:36Zcom_10342_41com_10342_1col_10342_45
Lesko, Charles J. Jr.
Pickard, John L.
2012-03-28T13:45:27Z
2012-03-28T13:45:27Z
2010
Journal of Online Engineering Education; 1:2 p. 1-6
http://hdl.handle.net/10342/3788
For nearly two decades, the Web has provided the
classroom with vast, ever-expanding volumes of browseraccessible
information. As the web has evolved so too has
our desire to become more involved with the process of
content-creation and content-sharing. Now new web-based
technologies look to provide smarter, more meaningful
content and present that content with a new level of depth
and interactivity. No longer are faculty and students
browsing for information that is largely static; instead, these
users are interacting through their three-dimensional (3-D)
proxies (their avatars) and are querying applications
(semantic web agents) soliciting them to collect, filter, verify,
correlate, and present answers to their queries. Yet, all of
this capability is not without potential challenges.
There is an evolving need for faculty and students to find
and build out new structure in their 3-D virtual
surroundings that visually enables their content, making it
more palatable to the user while presenting it in a 3-D
format verses the typical 2-D format that has been the
mainstay for the past two decades. With the maturation of
virtual world (3-D Web) and semantic web technologies, the
web-based content available in the classroom increases
exponentially and takes on a new look. Following a brief
overview of these two technologies and their overall impact
in the classroom, this article presents several practical
approaches for presenting course content in 3-D Web
environments based on recent implementation efforts. In-
World lectures and lab assignments, project team briefing
sessions, student mentoring activities, and open conference
forums are just a few of the areas discussed. Further
discussions also focus on setup and future evaluation studies
planned in the near-term to further evaluate course content
presentation techniques.
en_US
Virtual worlds
3-D web
Collaboration
Communication
Content delivery
Data in Depth: Web 3-D Technologies Provide New Approaches to the Presentation of Course Content
Article
oai:TheScholarship.intra.ecu.edu:10342/71412021-03-03T21:20:44Zcom_10342_41com_10342_1col_10342_45
Hunter, Dustin
2019-04-05T00:00:06Z
2019-04-05T00:00:06Z
2019-04-04
http://hdl.handle.net/10342/7141
This paper will describe data breaches and how they impact the companies and consumers involved. Statistics will be provided on the financial impact of a data breach to a company. Three recent data breach examples will be given, along with a description of what personally identifiable information was compromised. I will also describe steps companies and consumers can take to mitigate the risks related to personally identifiable information.
en_US
Data Breach
Personal Identifiable Information
Data Breach Impacts on Companies and Their Consumers.
Working Papers
oai:TheScholarship.intra.ecu.edu:10342/72802021-03-03T21:21:47Zcom_10342_41com_10342_1com_10342_122col_10342_42col_10342_124
Ashayer, Alireza
2019-06-12T20:04:26Z
2020-01-23T09:02:00Z
2019-05-01
http://hdl.handle.net/10342/7280
With the introduction of Bitcoin in 2008 as the first practical decentralized cryptocurrency, interest in cryptocurrencies and their underlying technology, Blockchain, has skyrocketed. Their promise of security, anonymity, and lack of a central controlling authority makes them ideal for users who value their privacy. Academic research on machine learning, Blockchain technology, and their intersection has increased significantly in recent years. Specifically, one area of interest for researchers is the possibility of predicting the future prices of these cryptocurrencies using supervised machine learning techniques. In this thesis, we investigate the ability of five widely used time-series prediction models to make one-day-ahead price predictions for several popular cryptocurrencies. These models are designed by optimizing model parameters, such as activation functions, before settling on the final models presented in this thesis. Finally, we report the performance of each time-series prediction model, measured by its mean squared error and by its accuracy in predicting the direction of price movement.
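The two evaluation measures named in the abstract (mean squared error and direction-of-movement accuracy) can be sketched directly. The price series below is fabricated for illustration, and the direction convention (predicted change relative to the previous day's actual price) is an assumption, since the thesis does not spell it out here.

```python
import numpy as np

def evaluate_predictions(actual, predicted):
    """Score one-day-ahead price predictions by MSE and by accuracy of the
    predicted up/down movement relative to the previous day's actual price."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    mse = float(np.mean((actual - predicted) ** 2))
    actual_dir = np.sign(np.diff(actual))             # realized day-over-day move
    predicted_dir = np.sign(predicted[1:] - actual[:-1])  # predicted move
    direction_accuracy = float(np.mean(actual_dir == predicted_dir))
    return mse, direction_accuracy

# Hypothetical 4-day price series and model output.
actual = [100.0, 102.0, 101.0, 105.0]
predicted = [100.5, 101.5, 102.0, 104.0]
mse, acc = evaluate_predictions(actual, predicted)
```

MSE rewards predictions close in value, while direction accuracy rewards getting the up/down call right; a model can do well on one and poorly on the other, which is why the thesis reports both.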
en
Time-series prediction, Bitcoin
Modeling and Prediction of Cryptocurrency Prices Using Machine Learning Techniques
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/67442021-03-03T21:17:49Zcom_10342_41com_10342_1com_10342_122col_10342_42col_10342_124
Das, Gourav
2018-05-25T17:31:17Z
2018-05-25T17:31:17Z
2018-05-02
http://hdl.handle.net/10342/6744
In the early 1990s, the Modified Condition/Decision Coverage (MC/DC) criterion was suggested as a structural white-box testing approach, but it can also be used for black-box specification-based testing. Practical application of MC/DC for specification-based testing has its own unique features and is sometimes quite different from code-based applications. However, MC/DC as a black-box approach has not been studied sufficiently, and thus the application of MC/DC for specification coverage was the main research problem considered in this thesis. The goal of this study was to analyze MC/DC as a black-box technique, investigate factors that distinguish black- and white-box applications of this approach, and provide proper definitions and rules, with a prototype implementation, to evaluate the MC/DC level during black-box testing.
en
MC/DC
MC/DC COVERAGE FOR REQUIREMENTS SPECIFICATIONS
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/88142021-12-01T09:01:53Zcom_10342_41com_10342_1com_10342_122col_10342_42col_10342_124
Houshvand, Salar
2020-12-18T16:00:46Z
2021-12-01T09:01:53Z
2020-11-17
http://hdl.handle.net/10342/8814
Automated question generation is critical for realizing personalized learning. Learning research also shows that answering questions is a more effective study method than rereading the textbook multiple times. However, creating different types of questions is intellectually challenging and time-intensive, which emphasizes the need for an automated way to generate and evaluate questions. In this research, after analyzing the existing approaches to automated question generation, we conclude that most current systems use natural language processing techniques to extract questions from text; other topics, such as mathematics, therefore lack an automated question generation system that could help learners assess their knowledge. In this research we present a novel framework that automatically generates unlimited numbers of questions for different topics in discrete mathematics. We created multiple algorithms for various questions in four main topics using Python. Our final product is presented as an application programming interface (API) using the Flask library, which makes it easy to access and use this system in any future developments. Finally, we discuss potential extensions that can be added to our framework as future contributions. The repository for this framework is freely available at https://github.com/SalarHoushvand/discrete-math-restfulAPI.
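The pattern the abstract describes (algorithmically generated questions served over a Flask API) can be sketched with one endpoint. The route name, question type, and JSON shape below are hypothetical; the framework's actual routes live in the linked repository and may differ.

```python
import random

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/question/modular")
def modular_question():
    """Generate a fresh modular-arithmetic question with its answer.

    Because operands are drawn at random, the endpoint yields an effectively
    unlimited supply of distinct questions, as the framework's design intends.
    """
    a = random.randint(10, 99)
    m = random.randint(2, 9)
    return jsonify({
        "question": f"What is {a} mod {m}?",
        "answer": a % m,
    })

if __name__ == "__main__":
    app.run()
```

A client (e.g., a tutoring front end) would GET the route, show `question` to the learner, and check the response against `answer`.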
en
API
Auto Question Generation
Flask
A FRAMEWORK FOR AUTOMATICALLY GENERATING QUESTIONS FOR TOPICS IN DISCRETE MATHEMATICS
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/128562023-06-05T13:55:13Zcom_10342_41com_10342_1com_10342_122col_10342_43col_10342_124
Garay, Daniel Enrique
2023-06-05T13:55:13Z
2023-06-05T13:55:13Z
2023-05-04
http://hdl.handle.net/10342/12856
In the construction industry, workers are constantly exposed to hazards such as chemical exposures, falls from heights, and accidents involving large machinery. Construction sites generate threats to human life and property, making safety a priority. When safety is taken seriously, accidents, fatalities, and property damage can be avoided. The performance of construction workers impacts projects and determines their quality and success in achieving project goals. On construction sites, one indicator for measuring safety performance is safety attitude, understood as the individual's attitudes and actions towards the workplace. Several factors can affect a worker's safety attitude; one of these factors is fatigue. This study aimed to analyze the impact of fatigue on the safety performance of construction workers.
To collect information, eighty workers were interviewed over eight weeks. The data collected were analyzed using a linear regression model, repeated-measures analysis of variance (ANOVA), and Friedman's rank sum test. Most significantly, the analysis revealed a correlation between the three scales used to measure fatigue (OFER, CIS, and FAS). Due to this correlation between scales, the investigation continued with the analysis of the OFER scale only. Further analysis, using linear regression models, showed a strong relationship between safety attitude and the OFER scale. As a result, safety attitude significantly predicted fatigue levels in the construction workers. On the other hand, results showed that fatigue did not affect safety attitude, but safety attitude affected fatigue, at least over short periods.
In conclusion, a worker’s attitude in reaction to workplace safety might be influenced by a high number of variables, amongst these variables fatigue is our focus. Simultaneously, it is critical to comprehend additional aspects to build a safer workplace. The results of this study highlight the significance of encouraging a safety attitude culture at work, as this can have a big impact on workplace safety. The report also recommends that fatigue management programs must be introduced in the construction sector to enhance safety and lower the hazards related to fatigue.
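Friedman's rank sum test, one of the methods listed above, compares repeated measurements on the same subjects without assuming normality. The numbers below are fabricated for illustration (the study interviewed 80 workers over 8 weeks; this sketch uses 5 subjects and 3 time points).

```python
from scipy.stats import friedmanchisquare

# Illustrative only: three repeated fatigue scores (e.g., weeks 1-3)
# for five hypothetical workers. Each row position is the same worker.
week1 = [12, 15, 11, 14, 13]
week2 = [14, 16, 13, 15, 14]
week3 = [15, 18, 14, 17, 16]

# Friedman's test ranks the three measurements within each worker and asks
# whether the rank pattern is consistent across workers.
stat, p = friedmanchisquare(week1, week2, week3)
```

Here every worker's score rises week over week, so the within-subject rank ordering is identical for all five workers and the test reports a significant difference across the three time points.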
en
Construction, Fatigue, Safety Attitude, Safety Performance, Construction Safety
THE IMPACT OF FATIGUE ON THE SAFETY PERFORMANCE OF CONSTRUCTION WORKERS: A LONGITUDINAL STUDY
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/44872021-03-03T20:56:44Zcom_10342_1com_10342_6421com_10342_41col_10342_72col_10342_6422col_10342_45
Wenzel, Kelsey A
2014-08-06T20:21:43Z
2015-08-06T06:30:13Z
2014
http://hdl.handle.net/10342/4487
Sustainability is achieved when social, environmental, and economic functions are operating in unity at maximum efficiency. This study takes a closer look at the sustainable knowledge of students at East Carolina University. An electronic survey of ten questions was developed and distributed through e-mail to approximately four hundred randomly selected ECU students. The data was then analyzed to identify whether or not there were patterns of knowledge in relation to gender, academic level, and college through which each student is pursuing their degree. The findings can be used to determine if there needs to be an increase in sustainable education at ECU and ultimately earn the university points towards being qualified as a sustainable institution by the standards of AASHE STARS.
Sustainability
East Carolina University
Sustainability literacy
A Sustainability Literacy Assessment of Students at East Carolina University
Undergraduate Thesis
oai:TheScholarship.intra.ecu.edu:10342/88562021-03-03T22:10:49Zcom_10342_41com_10342_1col_10342_45
RANGARAJAN, ANURADHA
2021-02-15T13:15:49Z
2021-02-15T13:15:49Z
2020-07
http://hdl.handle.net/10342/8856
Electronic Health Record (EHR) is a technology innovation which has the potential to offer valuable benefits to the healthcare industry such as improved quality of patient care and safety, optimization of healthcare workflow processes and availability of electronic data for clinical research. The implementation success of EHR is therefore significant to the healthcare industry in the United States and around the world. Prior studies in research literature have considered the impact of technology attributes, organizational learning attributes, and service attributes on information technology implementations in various other domains based on theories such as Theory of Reasoned Action (TRA), Theory of Planned Behavior (TPB) and Technology Acceptance Model (TAM), but none have considered their association with implementation success in a comprehensive manner within a single study pertaining to the healthcare domain as this study does. Hence, this study addresses an essential research gap. The approach used by this study in conducting the research based on a multi-factor research model (including the aforementioned attributes) is consistent with the general method used by academic researchers whereby the ability of a unique and selective list of factors to predict certain outcomes is leveraged. The data for this research study was collected using a questionnaire survey instrument based on the Likert scale. Structural Equation Modeling (SEM) was used for data analysis due to the presence of latent variables in the research model. The results of the statistical analyses support the hypotheses confirming positive associations between technology attributes (ease of use, result demonstrability, performance expectancy), organizational learning attributes (organizational learning capability, organizational absorptive capacity), service attributes (service-dominant orientation), and EHR implementation success.
The results of this study are of importance to both academicians and practitioners.
en
Information Science; Information Technology; Health Care Management
Technology Attributes, Organizational Learning Attributes, Service Attributes, and Electronic Health Record Implementation Success
Thesis
oai:TheScholarship.intra.ecu.edu:10342/17572021-03-03T20:52:49Zcom_10342_41com_10342_1com_10342_122col_10342_42col_10342_124
Starov, Oleksii
2013-06-06T12:18:26Z
2013-06-06T12:18:26Z
2013
http://hdl.handle.net/10342/1757
Mobile application testing and testing over a cloud are two highly topical fields nowadays. Mobile testing presents specific test activities, including verification of an application against a variety of heterogeneous smartphone models and versions of operating systems (OS), build distribution and test team management, monitoring and user experience analytics of an application in production, etc. Cloud benefits are widely used to support all these activities. This study conducts in-depth analyses of existing cloud services for mobile testing and addresses their weaknesses regarding research purposes and the testing needs of critical and business-critical mobile applications. During this study, a Cloud Testing of Mobile Systems (CTOMS) framework for effective research crowdsourcing in mobile testing was developed. The framework is presented as a lightweight and easily scalable distributed system that provides a cloud service to run tests on a variety of remote mobile devices. CTOMS implements two novel functionalities that are demanded by advanced investigations in mobile testing. First, it allows full multidirectional testing, which provides the opportunity to test an application on different devices and/or OS versions, and to test new device models or OS versions for their compatibility with the most popular applications in the market, or just legacy critical apps, etc. Second, CTOMS demonstrates the effective integration of appropriate testing techniques for mobile development within such a service. In particular, it provides a user with suggestions about coverage of configurations to test on, using combinatorial approaches like base choice, pair-wise, and t-way. The current CTOMS version supports automated functional testing of Android applications and detection of defects in the user interface (UI). This has great value because requirements for UI and user experience are high for any modern mobile application. A fundamental analysis of possible test types and techniques using a system like CTOMS was conducted, and ways of possible enhancements and extensions of functionality for future research are listed. The first case studies demonstrate the implemented novel concepts, their usefulness, and their convenience for experiments in mobile testing. The overall work proves that a study of cloud mobile testing is feasible even with small research resources.
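The pair-wise (2-way) combinatorial approach mentioned above covers every pair of parameter values across every pair of parameters without running the full cross product of device configurations. A minimal sketch of checking that property; the device-cloud parameters and values are hypothetical, not CTOMS's actual configuration model:

```python
from itertools import combinations, product

def pairwise_covered(suite, n_params):
    """Return True if the test suite covers every value pair for every
    pair of parameters (pairwise / 2-way combinatorial coverage)."""
    domains = [sorted({t[i] for t in suite}) for i in range(n_params)]
    for i, j in combinations(range(n_params), 2):
        needed = set(product(domains[i], domains[j]))
        seen = {(t[i], t[j]) for t in suite}
        if needed - seen:
            return False  # some value pair for (param i, param j) is untested
    return True

# Hypothetical configurations: (OS version, screen size, locale).
# Four tests cover all 12 value pairs that the full 8-config product would.
suite = [
    ("android10", "small", "en"), ("android10", "large", "fr"),
    ("android11", "small", "fr"), ("android11", "large", "en"),
]
```

Dropping any one configuration breaks coverage, which is why a service suggesting pairwise suites can cut the number of remote devices a test run needs.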
Computer science
Androids
Cloud testing
Combinatorial coverage
Device cloud
Mobile testing
Research crowdsourcing
Cloud Platform for Research Crowdsourcing in Mobile Testing
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/105372022-04-08T07:16:01Zcom_10342_41com_10342_1col_10342_45
Das, Kanchan
2022-04-08T01:11:17Z
2022-04-08T01:11:17Z
2018-08-19
2579-9363
http://hdl.handle.net/10342/10537
10.31387/oscm0350212
en_US
sustainability factors and practices
social responsibility
multi-objective mathematical model
Integrating Sustainability in the Design and Planning of Supply Chains
Article
oai:TheScholarship.intra.ecu.edu:10342/67692021-03-03T21:17:50Zcom_10342_41com_10342_1com_10342_122col_10342_42col_10342_124
Fathi, Ehsan
2018-05-25T18:03:04Z
2020-05-01T08:01:54Z
2018-04-27
http://hdl.handle.net/10342/6769
Image classification is a main task in image processing. Although there have been many advances in recent years, it is still quite a challenge. Meanwhile, due to progress in technology, e-commerce has emerged as the fastest-growing sector of the U.S. marketplace. Product classification is an extremely important issue in e-commerce. In this work, we propose a scalable, flexible, practical, modular, and efficient architecture that uses image classification techniques for product classification from product images alone. Considering the diversity of products offered in online retail stores, it is not surprising that we confront an excessive number of classes. The case study is Cdiscount, the biggest non-food e-commerce company in France, with about 3 billion euros in revenue. As the company's growth trend shows, it will soon have about 30 million products for sale, compared to 10 million products two years ago. As the next step toward business expansion, the company decided to employ image processing techniques. The structure, diversity, and volume of the dataset make it unique among publicly available datasets. We focused on developing a CNN architecture to tackle this challenge and provide a more general, flexible, scalable, and efficient solution for Cdiscount's image classification business problem. Results of applying the proposed architecture show reasonable accuracy, demonstrating the efficiency of the architecture. A comparison between the proposed model and previous models is also provided.
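The building block of any CNN architecture like the one the abstract proposes is the convolution operation, which slides a small learned filter over the image. A minimal NumPy sketch of the operation itself (illustrative only; the thesis's actual network and filters are not shown here):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D convolution (cross-correlation form, as used in CNNs):
    slide the kernel over the image and take a weighted sum at each position."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny image with a vertical edge, and a horizontal-difference kernel:
# the response is nonzero exactly where the edge sits.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
kernel = np.array([[1.0, -1.0]])
edges = conv2d(image, kernel)
```

In a trained CNN the kernels are learned rather than hand-picked, and many such filter responses are stacked and pooled to produce the features the final classifier uses.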
en
product classification
Convolutional Neural Network
A Scalable Solution for Extreme Multi-class Product Classification: An E-commerce Case Study
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/45142021-03-03T20:58:59Zcom_10342_41com_10342_1com_10342_122col_10342_42col_10342_124
Brinkley, Julian
2016-02-15T19:36:53Z
2016-02-15T19:36:53Z
2014-08-28
http://hdl.handle.net/10342/4514
While social networking sites (SNSs) like Facebook are widely used and have been broadly studied, investigations of their use by individuals with visual impairments are scarce within the academic literature. Anecdotal complaints regarding their usability, however, can be found in abundance online, an extension of the well-documented difficulty that users with visual impairments have in interacting with the web generally, relative to sighted users. The investigation of this issue began with a pilot study of the online behavioral habits of 46 internet users, 26 of whom self-identified as having a visual impairment (either blind or low vision). This was followed by an ethnographic usability study of the Facebook mobile interface, involving six blind participants, using JAWS screen reading software on desktop computers. Of the features evaluated, participants were most severely challenged by the process of creating a user profile and identifying other users with whom to establish relationships. A portable profile architecture based on semantic web technologies is presented as a potential solution that may improve usability by decoupling profile and relationship maintenance activity from any single system.
Computer science
Blind
Facebook
Human Computer Interaction
Social Networking Site
Usability
Web Accessibility
Exploring the Usability Issues Encountered by Individuals with Visual Impairments on Social Networking Sites: Problem Description, System Evaluation and Semantic Web Solution
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/49992021-03-03T20:58:26Zcom_10342_41com_10342_1com_10342_122col_10342_42col_10342_124
Darafsheh, Kaveh
2015-08-24T16:58:52Z
2017-02-07T22:22:33Z
2015
http://hdl.handle.net/10342/4999
This thesis will compare and evaluate different approaches in integrating runtime monitors into processes running on a hard real-time operating system. The host system is a single board computer (SBC) with a VxWorks 653 hard real-time operating system henceforth referred to as a flight control computer (FCC). The FCC is an integrated modular avionics (IMA) system representative of actual flight computers. VxWorks 653 is based on the ARINC 653 standard and provides time and space partitioning for IMA systems.
Computer science
Runtime Monitoring On Hard Real-Time Operating Systems
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/60442021-03-03T21:09:36Zcom_10342_41com_10342_1com_10342_122col_10342_44col_10342_124
Vargas, Daniel E.
2017-01-11T22:10:24Z
2019-02-26T14:23:44Z
2016-12-13
http://hdl.handle.net/10342/6044
The use of electrospun nanofiber scaffolds has emerged as a technique for tissue engineering (TE) applications. In 2011, Sullivan et al. reported on a process to effectively electrospin and crosslink nanofibers from poly(ethylene oxide) (PEO) and [beta]-lactoglobulin (BLG) aqueous solutions. PEO and BLG are both biodegradable and biocompatible materials. Crosslinking PEO/BLG nanofibers is necessary to improve their aqueous stability for TE applications. However, the heat treatment process suggested by Sullivan et al. is time intensive. The purpose of this study was to a) investigate an alternative crosslinking method for electrospun nanofibers made from an aqueous protein solution, b) assess the resulting nanofibers for their potential use as scaffolds for TE applications, and c) evaluate the effect of biologically treated nanofiber scaffolds on stem cell proliferation. Chemical crosslinking techniques using sodium trimetaphosphate (STMP) combined with sodium hydroxide (NaOH) were evaluated. STMP has been shown to effectively crosslink polysaccharide nanofibers in situ during electrospinning. Methods: STMP, at various concentrations, was added to PEO/BLG electrospinning solutions. The effects of STMP were characterized by measuring the solution's viscosity, pH, and conductivity. Confocal laser scanning microscopy (CLSM) images were acquired to qualitatively assess electrospun nanofiber morphology and scaffold topography. Human mesenchymal stem cells (hMSC) were grown on PEO/BLG scaffolds under control conditions and when treated with the protein Thymosin-[beta]4 (T[beta]4). HMSC proliferation was assessed to evaluate the effects of PEO/BLG nanofiber scaffolds and different T[beta]4 treatments at days 2, 4, and 8. Results: Using STMP to chemically crosslink PEO/BLG electrospun scaffolds affected solution properties, nanofiber morphology, and scaffold topography. PEO/BLG/STMP nanofibers were highly beaded and wavy with little structure relative to PEO/BLG nanofibers. Fibers were not stable in an aqueous solution. Using T[beta]4 to treat the PEO/BLG nanofiber scaffolds and/or cell culture media improved hMSC proliferation with increased time in culture. HMSCs remained viable throughout the growth period for all treatments. However, hMSCs did not integrate into PEO/BLG nanofiber scaffolds, but attached to the scaffold surface. Conclusion: Using STMP, at the tested concentrations, as an alternative crosslinker for PEO/BLG nanofibers was ineffective and did not result in usable electrospun scaffolds. Chemically crosslinking PEO/BLG nanofibers requires further research in polymer chemistry to identify an alternative in situ crosslinking mechanism. Treating the scaffolds and/or media with T[beta]4 did result in improved hMSC proliferation. However, while hMSC cultures remained viable and proliferation increased with T[beta]4 treatments, further research is necessary to develop protocols that will enable hMSC integration with PEO/BLG nanofiber scaffolds.
en
HMSC
stem cells
Poly(ethylene Oxide)/β-lactoglobulin Electrospun Nanofibers: Chemical Crosslinking Assessment and Thymosin-β4 Functionalization
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/106042022-05-05T13:21:56Zcom_10342_41com_10342_1col_10342_10600col_10342_45
Meadows, Sydney
2022-05-05T13:21:56Z
2022-05-05T13:21:56Z
2022-04-08
http://hdl.handle.net/10342/10604
The use of mobile devices in the workplace has grown significantly over the years. Most adults do not leave home without their phones, laptops, tablets, or other portable media devices. With this expansion, policies were needed to protect those devices from the cybersecurity threats that can occur. This is how the Mobile Device Management (MDM) policy was created; it has since been implemented in most companies throughout the world. In this paper, I will go into depth about the MDM policy and why it is important in today's world of technology, especially within the workplace.
mobile device management
mobile device
Mobile Device Management Policy
Working Papers
oai:TheScholarship.intra.ecu.edu:10342/69852022-08-01T08:01:53Zcom_10342_41com_10342_1com_10342_122col_10342_44col_10342_124
Tate, Kinsley M
2018-08-14T15:28:06Z
2022-08-01T08:01:53Z
2018-07-23
http://hdl.handle.net/10342/6985
Autism is a genetically complex neurodevelopmental disorder in which patients exhibit social deficits in both verbal and non-verbal forms of communication and display restricted and repetitive behaviors. Approximately 1 in 68 children is diagnosed with Autism in the United States². The prevalence of Autism in North Carolina is even greater, where 1 in 58 children is diagnosed³. Autism is thought to be influenced by both genetic and environmental factors. Complex interactions between these factors make the creation of therapeutic treatments difficult to achieve. One environmental factor being studied in relation to Autism is the anti-depressant Fluoxetine. Fetal exposure to Fluoxetine, through maternal ingestion of the drug or consumption of drinking water where the drug is present, is thought to interrupt normal fetal brain development. Fluoxetine has previously been shown to increase dendritic spine formation; dendritic spines are the main location of excitatory synapse development. However, the exact mechanism that causes this dysregulation of the actin cytoskeleton is not fully understood. Post-mortem samples from individuals with Autism also display increased dendritic spine levels. We hypothesize that Fluoxetine acts through the Rac1 pathway to increase dendritic spine density. To examine the impact of Fluoxetine on fetal synapse formation, human cortical organoids, or 'mini-brains', were created to recapitulate the second-trimester fetal brain. Once the 'mini-brains' reached the appropriate time point in development, they were treated either acutely with Fluoxetine, chronically with Fluoxetine, with the Rac1 inhibitor NSC23766, or with a combination of Fluoxetine and NSC23766. After 90 days in culture, the 'mini-brains' were harvested, fixed, cryosectioned, and stained for pre- and post-synaptic markers. Using ImageJ, excitatory synapse density and morphology were analyzed. It was determined that Fluoxetine caused enlargement of synapses, which were irregular in shape.
The effects of Fluoxetine on synapse formation were reduced when it was combined with the Rac1 inhibitor NSC23766. In addition to examining excitatory synapse formation, the effects of Fluoxetine and NSC23766 on electrical signal transmission were also observed using micro-electrode technology. Both Fluoxetine and NSC23766 were shown to decrease neuronal activity.
en
Synapse Formation
Role of antidepressants in fetal synapse formation in Autism Spectrum Disorders
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/106542023-11-02T16:31:20Zcom_10342_41com_10342_1com_10342_122col_10342_44col_10342_124
Wang, Lana
2022-06-09T19:11:25Z
2022-06-09T19:11:25Z
2022-04-28
http://hdl.handle.net/10342/10654
Mild cognitive impairment (MCI) is considered an early stage of Alzheimer's disease, characterized by mild memory loss. Using electroencephalogram (EEG) data, a novel method of functional connectivity (FC) analysis can be used to detect MCI before memory is significantly impaired, allowing preventative measures to be taken. FC examines interactions between EEG channels to grant insight into underlying neural networks, and also allows an examination of the effects of MCI on these networks. The FC method of weighted phase lag index (wPLI) provided insight into the link between the pathology of Alzheimer's disease and cognitive loss. wPLI was analyzed per frequency band (theta, alpha, mu, beta) and by channel combination group (intra-hemispheric short, intra-hemispheric long, inter-hemispheric short, inter-hemispheric long, transverse). MCI subjects were found to have a statistically significantly lower ΔwPLI_P300 than normal controls in the mu intra-hemispheric short (p = 0.0286), mu intra-hemispheric long (p = 0.0477), mu inter-hemispheric short (p = 0.0018), and alpha intra-hemispheric short (p = 0.0423) groups. Results indicate a possible deficiency in the dorsal visual processing pathway among MCI subjects, as well as an unbalanced coordination between the two hemispheres.
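The wPLI measure named in the abstract can be illustrated with a short sketch. This follows the standard definition wPLI = |E[Im Sxy]| / E[|Im Sxy|] over trials, where Sxy is the cross-spectrum between two channels; the signal parameters, trial counts, and frequencies below are invented and this is not the thesis's pipeline:

```python
import numpy as np

def wpli(x, y):
    """Weighted phase lag index between two channels across trials.

    x, y: arrays of shape (n_trials, n_samples).
    Returns wPLI per FFT frequency bin.
    """
    X = np.fft.rfft(x, axis=1)
    Y = np.fft.rfft(y, axis=1)
    im = np.imag(X * np.conj(Y))         # imaginary cross-spectrum per trial
    num = np.abs(np.mean(im, axis=0))    # |E[Im Sxy]|
    den = np.mean(np.abs(im), axis=0)    # E[|Im Sxy|]
    return num / np.maximum(den, 1e-12)  # guard against a zero denominator

# Two noisy sinusoids with a consistent 90-degree lag: wPLI near 1 at 10 Hz.
rng = np.random.default_rng(0)
t = np.arange(256) / 256.0               # one-second epochs, so bin k = k Hz
trials_x, trials_y = [], []
for _ in range(50):
    phase = rng.uniform(0, 2 * np.pi)    # random phase per trial, fixed lag
    trials_x.append(np.sin(2 * np.pi * 10 * t + phase) + 0.1 * rng.standard_normal(256))
    trials_y.append(np.sin(2 * np.pi * 10 * t + phase - np.pi / 2) + 0.1 * rng.standard_normal(256))
w = wpli(np.array(trials_x), np.array(trials_y))
print(w[10])
```

Because the phase lag has a consistent sign across trials, the estimate at the 10 Hz bin approaches 1, while bins containing only noise stay low.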
en
event related potentials
functional connectivity
Functional Connectivity Analysis of Visually Evoked ERPs for Mild Cognitive Impairment
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/122112023-02-08T08:16:41Zcom_10342_41com_10342_1col_10342_43
Wang, George
Zhu, Jiasheng
2023-02-07T20:51:20Z
2023-02-07T20:51:20Z
2022
0959-6526
http://hdl.handle.net/10342/12211
10.1016/j.jclepro.2022.134086
en_US
COVID-19
Nitrile gloves
Soil stabilisation
Reusing COVID-19 Disposable Nitrile Gloves to Improve the Mechanical Properties of Expansive Clay Subgrade: An Innovative Medical Waste Solution
Article
oai:TheScholarship.intra.ecu.edu:10342/128492023-06-05T13:53:42Zcom_10342_41com_10342_1com_10342_122col_10342_42col_10342_124
Sharon, Sone
2023-06-05T13:53:42Z
2023-06-05T13:53:42Z
2023-05-04
http://hdl.handle.net/10342/12849
Time series data is prevalent in many fields, such as finance, weather forecasting, and economics. Predicting future values of a time series can provide valuable insights for decision-making, such as identifying trends, detecting anomalies, and improving resource allocation. Recently, Generative Adversarial Networks (GANs) have been used to learn from these features to aid in time-series forecasting. We propose a novel framework that utilizes the unsupervised paradigm of a GAN, building on the related TimeGAN research. Instead of using the discriminator as a classification model, we employ it as a regressive model to learn both temporal and static features. This framework can help generate synthetic data and facilitate forecasting. Our model outperforms TimeGAN, which only preserves temporal dynamics and uses the discriminator as a classifier to distinguish between synthetic and real datasets.
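The central change described above, scoring sequences with a regression objective rather than a real/fake classification, can be sketched in miniature. This is not a GAN and not the thesis's architecture; it only illustrates a "discriminator" head trained as a regressor (mean-squared-error forecasting over sliding windows) on an invented toy series:

```python
import numpy as np

# Toy series: noisy sine wave; the "discriminator" is a linear map from a
# window of past values to the next value, trained with an MSE (regression)
# loss instead of a binary real/fake loss.
rng = np.random.default_rng(1)
series = np.sin(np.linspace(0, 8 * np.pi, 400)) + 0.05 * rng.standard_normal(400)

WIN = 16
X = np.array([series[i:i + WIN] for i in range(len(series) - WIN)])
y = series[WIN:]                     # regression target: the next value

w = np.zeros(WIN)
lr = 0.01
for _ in range(2000):                # plain gradient descent on the MSE loss
    err = X @ w - y
    w -= lr * (X.T @ err) / len(y)

mse = float(np.mean((X @ w - y) ** 2))
print(mse)
```

The point of the sketch is the loss, not the model: replacing the classification head with a forecasting head gives the adversarial signal a temporal-prediction meaning, which is the modification the abstract describes.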
en
Time-series data
Recurrent neural networks
Generative Adversarial Networks
Forecasting
Time Series Forecasting Using Generative Adversarial Networks
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/70502022-12-09T16:11:35Zcom_10342_41com_10342_1com_10342_122col_10342_42col_10342_123
Mack, Dorothea M.
2019-02-14T18:11:07Z
2020-12-01T09:01:55Z
2018-12-10
http://hdl.handle.net/10342/7050
The purpose of this quantitative study was to determine if, and to what extent, racial and ethnic identity, years of experience, education level, age, gender, advisor status, and sexual orientation are related to multicultural competence among student affairs professionals who are responsible for advising racial and ethnic student organizations at predominantly white institutions (PWIs). Student organizations used for this study are distinguished by type: fraternities and sororities (Greek letter organizations); racial or ethnocultural advocacy and community organizations; and academic or social organizations. In order to elicit participants for this study, the researcher received a spreadsheet of 11,801 members from the National Association of Student Personnel Administrators (NASPA), which has since updated its name to Student Affairs Administrators in Higher Education. Of these members, only 2,585 met the requirements necessary to participate in this study. The 2,585 NASPA members were invited by email to participate. Four hundred ninety participants attempted the survey, a response rate of 19.0%. Of that sample, there were 338 usable responses for analytic purposes. Multicultural competence was measured by the Multicultural Competence in Student Affairs-Preliminary 2 (MCSA-P2) Scale. The MCSA-P2 had excellent reliability (α = .93) for the sample of student affairs advisors. Data analysis of the mean, standard deviation, and internal consistency was conducted to evaluate responses. Basic descriptive statistics were used to analyze research question one. Research question two was analyzed using an Analysis of Variance (ANOVA) to measure mean differences between advisors of multicultural and other types of student organizations.
Research question three was analyzed using multiple linear regression to display differences in advisors' multicultural competency by race/ethnicity, years of experience, level of education, gender, age, advisor status, and sexual orientation. The data analysis included the examination of the univariate statistics and revealed that race, sexual orientation, and advisor status were significant predictors of multicultural competency among student affairs advisors.
en
multicultural competence
Dynamic Model of Student Affairs Competence
Multicultural Competence in Student Affairs-Preliminary 2 (MCSA-P2) Scale
Advisor, racial and ethnic student organizations
EXAMINING MULTICULTURAL COMPETENCIES OF STUDENT AFFAIRS PROFESSIONALS WHO ADVISE STUDENT ORGANIZATIONS OF COLOR
Doctoral Dissertation
oai:TheScholarship.intra.ecu.edu:10342/86462021-03-12T19:33:38Zcom_10342_41com_10342_1com_10342_122col_10342_42col_10342_124
Busireddy, Swetha
2020-06-30T04:40:31Z
2020-09-17T08:01:54Z
2020-06-22
http://hdl.handle.net/10342/8646
Question answering (QA) systems have evolved rapidly over the past few years and have reached reliable, near-human performance. Attention mechanisms, as well as other deep learning methods, paved the way for this development. However, because of their single-pass nature, such systems are incapable of recovering from local maxima corresponding to incorrect answers. The dynamic coattention network (DCN) addresses this issue. But because it has only one coattention layer, the DCN's ability to compose diverse input representations is limited. We propose a few modifications to the DCN to overcome these limitations. First, we use a bidirectional long short-term memory network (biLSTM) to encode the question and document. Next, we apply the concept of self-attention to the DCN by using multiple coattention layers. This helps the encoder generate richer input representations. Lastly, we combine the outputs from these layers, which improves the handling of long-range dependencies. We built a question answering system based on this multi-attention DCN and tested it on one of our course documents. On the Stanford Question Answering Dataset (SQuAD), this system improves the validation F1 mean to 79.9% from the previous state of the art of 75.6%.
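A single coattention layer of the kind the DCN stacks can be sketched as follows. This follows the standard coattention construction (affinity matrix between document and question encodings, softmax in both directions, then second-level attention); the dimensions are toy values, and the thesis's biLSTM encoders and multi-layer combination are not shown:

```python
import numpy as np

def softmax(z, axis):
    z = z - z.max(axis=axis, keepdims=True)   # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def coattention(D, Q):
    """One coattention layer over a document and a question.

    D: (h, m) document encoding, Q: (h, n) question encoding.
    Returns the document-side coattention context of shape (2h, m).
    """
    L = D.T @ Q                   # (m, n) affinity between every word pair
    AQ = softmax(L, axis=0)       # attention over document words, per question word
    AD = softmax(L.T, axis=0)     # attention over question words, per document word
    CQ = D @ AQ                   # (h, n) document summaries for the question
    CD = np.vstack([Q, CQ]) @ AD  # (2h, m) second-level coattention context
    return CD

rng = np.random.default_rng(0)
D = rng.standard_normal((8, 5))   # toy document: 5 tokens, hidden size 8
Q = rng.standard_normal((8, 3))   # toy question: 3 tokens
C = coattention(D, Q)
print(C.shape)
```

Stacking several such layers and concatenating or summing their outputs is the "multiple coattention layers" idea the abstract describes.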
A FRAMEWORK FOR QUESTION ANSWERING SYSTEM USING DYNAMIC CO-ATTENTION NETWORKS
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/46892021-03-03T20:56:06Zcom_10342_41com_10342_1com_10342_122col_10342_42col_10342_124
Hotwani, Komal
2015-02-02T19:27:58Z
2016-05-11T21:42:05Z
2014
http://hdl.handle.net/10342/4689
Cloud outages have become very common in recent years as many companies adopt hosted services. One of the major reasons for cloud outages is that the service levels for availability, reliability, and security specified in a service level agreement are not met. The main purpose of this thesis is to introduce a model to calculate, control, and reduce the risk of cloud outages. The objective is also to help consumers make an educated decision and select the right provider. In order to understand the granularity of cloud outage risk and its impact on the current cloud business, a survey was conducted to support our claim that there is a strong need to calculate the risk associated with a service before signing a service level agreement. Survey responses also helped to prioritize the service level parameters used in our model from the consumer's point of view. Our model takes requirements, priorities, service level parameters, and cost as inputs. It implements a modified version of a well-known mathematical model, the Weighted Product Model (WPM), to compare and rank the eligible cloud providers. The methodology also uses Value at Risk (VaR), a term widely used in the financial industry. The final output of the model gives the risk value associated with a service for each parameter. Additionally, the model gives the probability of a cloud outage occurring. The resulting information will help consumers select the right cloud service provider with a minimum risk of cloud outage, and will bring visibility of risk between provider and consumer, helping to reduce the risk of cloud outages after the adoption of a cloud service.
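The Weighted Product Model mentioned above scores each alternative as the product of its criterion values raised to the criterion weights. A minimal sketch, with invented provider names, weights, and service-level scores (the thesis uses a modified WPM; this is only the textbook form):

```python
# Textbook Weighted Product Model: score = product of value ** weight.
# Provider names and all numbers below are made up for illustration.
weights = {"availability": 0.4, "reliability": 0.3, "security": 0.2, "cost": 0.1}
providers = {
    "ProviderA": {"availability": 0.999, "reliability": 0.95, "security": 0.90, "cost": 0.70},
    "ProviderB": {"availability": 0.990, "reliability": 0.99, "security": 0.85, "cost": 0.90},
}

def wpm_score(scores, weights):
    """Each criterion value (higher is better) raised to its weight, multiplied."""
    result = 1.0
    for criterion, w in weights.items():
        result *= scores[criterion] ** w
    return result

ranked = sorted(providers, key=lambda p: wpm_score(providers[p], weights), reverse=True)
print(ranked)
```

Because the criteria are multiplied rather than summed, a provider cannot hide a very poor score on one criterion behind strong scores elsewhere, which suits risk-oriented provider selection.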
Computer science
Formal Model To Reduce the Risk Of Cloud Outages
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/59552021-03-03T21:13:01Zcom_10342_41com_10342_1col_10342_43
Wang, George
Wang, Yuhong
Gaob, Zhili
2016-09-12T13:00:37Z
2016-09-12T13:00:37Z
2016-04
http://hdl.handle.net/10342/5955
Unlike blast furnace slags, which are volumetrically stable and easy to use in road construction in particular, steel slags contain variable proportions of unhydrated free lime that can cause them to expand. Thus, in order to consider a controlled use of these slags, it is essential to evaluate each type, according to its origin and the treatments it has undergone, to determine its volumetric instability and its expansion potential. Today, in the absence of quantified criteria for directing steel slags toward appropriate uses, they remain little used. The United States is a step ahead on this point. This article presents the results of studies on the volumetric expansion of steel slags aimed at developing criteria that could serve as indicators for the use of these slags as granular materials.
fr
Steel Slag
Volumetric thermal expansion
Thermal expansion
Construction management
Expansion des laitiers d’aciérie : à l’ouest, il y a du nouveau !
Article
oai:TheScholarship.intra.ecu.edu:10342/45162021-03-03T20:55:59Zcom_10342_7351com_10342_6421com_10342_41com_10342_1com_10342_122col_10342_7360col_10342_42col_10342_124
He, Yuan
2014-08-28T15:02:50Z
2014-08-28T15:02:50Z
2014
http://hdl.handle.net/10342/4516
The Group Mutual Exclusion (GME) problem, introduced by Joung, is a natural extension of the classical Mutual Exclusion problem. In the classical Mutual Exclusion problem, two or more processes are not allowed to be simultaneously in their CRITICAL SECTION, a piece of code where a common resource is accessed. In the GME problem, it is necessary to impose mutual exclusion on different groups of processes in accessing a resource, while allowing processes of the same group to share the resource. The Group Mutual Exclusion problem arises in several applications and is the focus of this thesis. We present an algorithm for the GME problem that satisfies the properties of Mutual Exclusion, Starvation Freedom, Bounded Exit, Concurrent Entry, and First-Come-First-Served. Our algorithm has Θ(N) shared space complexity and O(N) RMR (Remote Memory Reference) complexity. It is developed by generalizing the well-known Lamport's Bakery Algorithm for the classical mutual exclusion problem, while preserving its simplicity and elegance. Just like Lamport's Bakery Algorithm, our algorithm has the disadvantage that the token numbers can grow in an unbounded manner. When all shared variables are required to be of bounded size, Hadzilacos presented an algorithm whose shared space complexity is Θ(N²) and whose RMR complexity is claimed to be O(N). Hadzilacos posed as an open problem the development of a linear time and space algorithm that uses only bounded shared variables and only simple read and write instructions. As a solution to the open problem, Jayanti et al. presented a space-efficient adaptation of the above algorithm that uses only Θ(N) shared space and inherited the claim that the RMR complexity is O(N). We show that both of these algorithms have RMR complexity Ω(N²) and thus demonstrate that both claims are erroneous. So, the open problem posed by Hadzilacos is still open.
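Lamport's classical Bakery Algorithm, which the thesis generalizes, can be sketched directly. This is the starting-point algorithm only, not the thesis's GME algorithm; CPython's global interpreter lock makes individual list reads and writes effectively atomic, which is enough for this small demonstration:

```python
import sys
import threading

# Classical Bakery lock: each process takes a token one larger than any it
# sees, then waits for all processes with smaller (token, id) pairs.
sys.setswitchinterval(1e-4)   # switch threads often so the spin loops progress

N, ITERS = 4, 200
choosing = [False] * N
number = [0] * N
counter = 0                   # the shared resource protected by the lock

def lock(i):
    choosing[i] = True
    number[i] = 1 + max(number)            # take a token larger than any seen
    choosing[i] = False
    for j in range(N):
        if j == i:
            continue
        while choosing[j]:                 # wait while j is still picking a token
            pass
        while number[j] != 0 and (number[j], j) < (number[i], i):
            pass                           # defer to smaller (token, id) pairs

def unlock(i):
    number[i] = 0                          # token 0 means "not competing"

def worker(i):
    global counter
    for _ in range(ITERS):
        lock(i)
        counter += 1                       # critical section
        unlock(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)
```

The unbounded growth of `number[i]` visible here is exactly the drawback the abstract notes that the thesis's algorithm inherits.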
Computer science
Distributed algorithm
Distributed system
Group mutual exclusion
Mutual exclusion
Group Mutual Exclusion in Linear Time and Space
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/48082021-03-03T20:56:50Zcom_10342_41com_10342_1col_10342_44col_10342_72
Bryan, Alex
Abdelsalam, Rana
Williams, Charles
Abdelrahman, Mohamed
2015-05-06T13:15:16Z
2020-05-07T08:01:51Z
2015
Bryan, Alex; Abdelsalam, Rana; Williams, Charles; Abdelrahman, Mohamed. (2015). DESIGN OF A FORCE BIOFEEDBACK TOOTH EXTRACTION EDUCATIONAL DEVICE. Unpublished manuscript, Honors College, East Carolina University, Greenville, N.C.
http://hdl.handle.net/10342/4808
The current jaw model used by students from the East Carolina School of Dental Medicine to practice tooth extractions does not accurately simulate the forces in a real human mouth. Originally, multiple alternative designs were generated and divided into five categories: tooth material, full or partial jaw, tooth attachment method, sensor type, and alert system. These alternatives were analyzed, and the chosen options were metal teeth for the tooth material, a full jaw for the jaw type, fixed with pivot for the attachment method, strain gages on the forceps for the sensor type, and an LED for the alarm system. After further analysis it was decided that the following changes would be made. Instead of having one full jaw, individual teeth were mounted on rectangular prism and cylindrical bases. Strain gages were placed on the sides of the rectangular bases to determine bending moments and on the cylindrical bases to determine twisting moments applied to the teeth. The LED alarm system was used to alert the user when certain moment thresholds are met. A prototype was built and tested, and the design met the functional requirements of the engineering design specifications. Recommendations were provided to make the design more commercially feasible. These included assembling some of the electrical components in house, sending the models out to be manufactured by a third party, placing the LED lights closer to the model, and mounting the models in a way that better simulates a real extraction.
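The bending-moment side of the alert logic above can be illustrated with the standard flexure formula, M = E·I·ε/c, which converts a surface strain reading on a rectangular base into a moment. All material properties, dimensions, and the threshold below are invented for illustration; the abstract does not give the device's actual values:

```python
# Illustrative conversion of a strain-gage reading into a bending moment
# using the flexure formula M = E * I * strain / c. Numbers are made up.
E = 200e9          # Pa, Young's modulus of an assumed steel base
b, h = 0.01, 0.01  # m, rectangular cross-section width and height
I = b * h**3 / 12  # m^4, second moment of area of the rectangle
c = h / 2          # m, distance from the neutral axis to the surface gage

def bending_moment(strain):
    """Moment (N*m) implied by a measured surface strain."""
    return E * I * strain / c

THRESHOLD = 2.0    # N*m, hypothetical alert level

def led_on(strain):
    """True when the implied moment reaches the alert threshold."""
    return bending_moment(strain) >= THRESHOLD

print(bending_moment(1e-4), led_on(1e-4))
```

The twisting-moment channel on the cylindrical bases would follow the analogous torsion formula, with the polar moment of area in place of I.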
Engineering
Dentistry
Technology
DESIGN OF A FORCE BIOFEEDBACK TOOTH EXTRACTION EDUCATIONAL DEVICE
Honors Project
oai:TheScholarship.intra.ecu.edu:10342/105362022-04-08T07:16:02Zcom_10342_41com_10342_1col_10342_45
Das, Kanchan
2022-04-08T01:11:04Z
2022-04-08T01:11:04Z
2019-04-01
2455-7749
http://hdl.handle.net/10342/10536
10.33889/ijmems.2019.4.2-022
en_US
Sustainable food supply chain
Resilience criteria
Food collection cooperatives
Integrating Lean, Green, and Resilience Criteria in a Sustainable Food Supply Chain Planning Model
Article
oai:TheScholarship.intra.ecu.edu:10342/61482021-03-03T21:13:59Zcom_10342_122com_10342_41com_10342_1col_10342_124col_10342_45
Oluwajana, Temitope
2017-05-31T14:13:19Z
2017-05-31T14:13:19Z
2017-04-28
http://hdl.handle.net/10342/6148
The issue of students' perception of safety on their college campus has been a recurring problem. Students have been concerned about their safety on campuses because of the crime and violence occurring on college grounds and in surrounding areas. Some of these crimes include rape, robbery, burglary, dating violence, stalking, sexual assault, and gun violence. Examples of tragedies that have occurred on college grounds include the Virginia Tech tragedy and the rape of Jeanne Clery at Lehigh University. Lawmakers showed their concern for campus safety by creating and enforcing the Jeanne Clery Act. This research employed a survey and face-to-face semi-structured interviews to determine East Carolina University students' perceptions of safety on campus in relation to crime and violence. The classes where the survey was distributed were ART 1910 - Art Appreciation and ITEC 3292 - Industrial Safety. The objective of this research was to identify campus safety issues of concern to ECU students, as well as students' perceptions of campus security services vis-à-vis the impact of their presence on campus. The results indicate that students generally believe ECU is a safe campus. Also, the survey and interview results show that few students use the campus security services. The results also show that females view the campus as less safe than males do, and freshmen view the campus as less safe than upperclassmen do.
en
Students Perception of Safety in Campus VIS A VIS Crime and Violence- A Case Study of ECU
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/105432022-04-08T07:14:50Zcom_10342_41com_10342_1col_10342_45
Chou, Te-Shun
Pickard, John
Angolia, Mark
2022-04-08T01:14:39Z
2022-04-08T01:14:39Z
2018-02
2166-0123
http://hdl.handle.net/10342/10543
en_US
Internet
IPv6
standards of IP addresses
IPV6 Diffusion on the Internet Reaches a Critical Point
Article
oai:TheScholarship.intra.ecu.edu:10342/60362021-03-03T21:09:25Zcom_10342_41com_10342_1com_10342_122col_10342_42col_10342_124
Sanchez, Jimi Carmen
2017-01-11T21:48:42Z
2017-08-24T14:50:52Z
2016-12-13
http://hdl.handle.net/10342/6036
Decision Support Systems (DSS) are at the core of business intelligence systems. Implementation costs for an enterprise-level Database Management System (DBMS) and DSS average $10,461 for installation. This does not include costs associated with database migrations or testing, which can double the cost, nor does this quoted price include yearly licensing or support agreements. Depending on the software vendor, there may be additional costs associated with using an application cluster, logical and virtual partitioning, data guards, and even costs per processor core. It is easy to see how the cost of operating a database server can grow rapidly. Information Technology (IT) decision makers and software architects need the ability to choose a DBMS suited to their application's needs. To choose the correct DBMS solution, a comprehensive and adaptive benchmark is needed. This benchmark must be capable of predicting how the performance of a given system will scale, as well as offering an estimation of cost. A benchmark that is unable to accurately predict these values is worthless and leads to costly software decision mistakes. To remain successful and competitive in a given industry, it is important for organizations to know their customers, target and acquire new markets, and look to future trends. This is where database business intelligence and decision support systems become useful. DSS allow users to mine critical information about their work-flows, sales history, and trends, and to have the data readily available so that they may make informed decisions and plan future growth. Business intelligence tools and decision support systems provide executive officers and members of management the tools needed to create complex ad-hoc queries and mine important data.
Presently, IT decision makers and software engineers use the TPC-H decision support system benchmark as a guide to determining the optimal hardware and database vendor configurations for their decision support system. The TPC-H benchmark is a popular decision support system benchmark. In recent years, however, TPC-H has become heavily criticized for its many problems. The issues outlined within this thesis can lead IT decision makers to purchase and implement improper hardware and software solutions. This thesis examines the criticisms and issues of the TPC-H benchmark. Utilizing Amazon Web Services cloud computing power, we evaluate the Star Schema Benchmark (SSB) as an alternative to TPC-H. We successfully identify and demonstrate several previously undefined problems in the TPC-H benchmark. Our results conclude that the SSB not only resolves the issues inherent in TPC-H but also should serve as a replacement for it.
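The star-schema shape that distinguishes the SSB from TPC-H (one denormalized fact table joined directly to small dimension tables) can be illustrated with a toy query. The table and column names below are simplified stand-ins for the benchmark's lineorder/date schema, and the data is invented:

```python
import sqlite3

# Toy star-schema query: a fact table (lineorder) joined to a date dimension,
# aggregated by year, in the style of an SSB "flight 1" query.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE date_dim (d_datekey INTEGER PRIMARY KEY, d_year INTEGER);
CREATE TABLE lineorder (lo_orderdate INTEGER, lo_revenue INTEGER);
INSERT INTO date_dim VALUES (19940101, 1994), (19950101, 1995);
INSERT INTO lineorder VALUES (19940101, 100), (19940101, 250), (19950101, 400);
""")
row = con.execute("""
    SELECT d_year, SUM(lo_revenue)
    FROM lineorder JOIN date_dim ON lo_orderdate = d_datekey
    WHERE d_year = 1994
    GROUP BY d_year
""").fetchone()
print(row)
```

In the real SSB the fact table is joined to several such dimensions at once; the single-join version here is only meant to show the star layout that the benchmark exercises.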
en
tpc-h
ssb
star schema
benchmark
INVESTIGATING THE STAR SCHEMA BENCHMARK AS A REPLACEMENT FOR THE TPC-H DECISION SUPPORT SYSTEM
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/39372021-03-03T20:56:21Zcom_10342_41com_10342_1com_10342_122col_10342_42col_10342_124
Masters, Miciah Dashiel Butler
2012-09-04T18:08:20Z
2014-10-01T14:45:53Z
2012
http://hdl.handle.net/10342/3937
We provide a survey of research surrounding the Černý conjecture. This conjecture concerns finite-state automata that have the property of being "synchronizing." A synchronizing automaton is one for which there exists some input sequence that causes the automaton to transition to some fixed state, irrespective of the state in which it had been before reading that input sequence. The Černý conjecture states that, if an automaton with n states is synchronizing, then there exists some input sequence of length at most (n - 1)² that synchronizes it. We first survey the basic results that deal with synchronization of finite-state automata. We also study and implement several related algorithms, including Eppstein's greedy algorithm for producing a reset sequence. An analysis of the length of the sequence produced by this algorithm leads to an interesting problem in extremal combinatorics, the solution of which yields an upper bound of (n³ - n)/6 on the length of the sequence. We then investigate a generalization of the Černý conjecture. Next, we study extremal automata, for which the length of the shortest synchronizing sequence meets the Černý bound. We then turn our attention to subclasses of automata for which the Černý conjecture has been proved. Finally, we discuss possible approaches and current efforts to proving the Černý conjecture, as well as some related problems from the literature.
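Eppstein's greedy algorithm mentioned above can be sketched concretely: repeatedly find, by breadth-first search over state pairs, a shortest word that merges some pair of the remaining states, and append it until a single state is left. The automaton below is the 4-state Černý automaton, for which (n - 1)² = 9 and (n³ - n)/6 = 10:

```python
from collections import deque
from itertools import combinations

# The Cerny automaton C_4: letter 'a' cycles the states, 'b' merges 0 into 1.
delta = {
    'a': {0: 1, 1: 2, 2: 3, 3: 0},
    'b': {0: 1, 1: 1, 2: 2, 3: 3},
}

def merge_word(p, q):
    """BFS over state pairs for a shortest word sending p and q to one state."""
    seen = {(p, q): ""}
    queue = deque([(p, q)])
    while queue:
        x, y = queue.popleft()
        if x == y:
            return seen[(x, y)]
        for letter, f in delta.items():
            nxt = (f[x], f[y])
            if nxt not in seen:
                seen[nxt] = seen[(x, y)] + letter
                queue.append(nxt)
    return None  # unreachable for a synchronizing automaton

def apply_word(state, word):
    for letter in word:
        state = delta[letter][state]
    return state

def greedy_reset():
    """Eppstein's greedy strategy: merge the cheapest pair until one state remains."""
    current = set(delta['a'])
    word = ""
    while len(current) > 1:
        p, q = min(combinations(sorted(current), 2),
                   key=lambda pq: len(merge_word(*pq)))
        w = merge_word(p, q)
        word += w
        current = {apply_word(s, w) for s in current}
    return word

w = greedy_reset()
print(w, len(w))
```

On this automaton the greedy word stays within the (n³ - n)/6 bound from the abstract, although it can exceed the conjectured (n - 1)² optimum, which is exactly the gap the extremal analysis studies.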
Computer science
Automata
Černý conjecture
Deterministic
Finite-state
Synchronizing automata
Synchronizing Automata and the Černý Conjecture
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/38092021-03-03T20:52:35Zcom_10342_41com_10342_1col_10342_45
Lesko, Charles J. Jr.
2012-04-17T15:04:23Z
2012-04-17T15:04:23Z
2011
Journal of Online Engineering Education; 2:2 p. 1-10
http://hdl.handle.net/10342/3809
Recently, a great deal of attention has been put toward efforts to integrate teaching methodologies and strategies between face-to-face and online classrooms, looking to maximize learning by combining delivery modalities. Studies point to students not only learning more when online capabilities were added to traditional courses, but also increasing their level of interaction, thereby improving their sense of satisfaction with the courses taken. However, these studies tend to isolate deliveries to either all-online or all on-campus classes and students, without taking into account the more recent movement toward blended teaching methods that look to cross the barriers between online and face-to-face students.
To meet some of the collaborative requirements for blending instruction, virtually immersive environments are beginning to show promise as an interactive communication media that can facilitate the needs of several communities including e-learning, distance education and corporate training. So the question was posed - what happens when online students are given the opportunity, through the use of virtually immersive technologies, to engage with students attending traditional on-campus sessions? Thus, the purpose of this case study is to evaluate the use of virtually immersive technologies as a platform for the conduct of synchronous and asynchronous classroom activities. This article also presents the framework for conducting an undergraduate level ‘Technology Project Management’ course that includes delivery approaches to students from both online (Distance Education) class offerings and on-campus (Face-to-face) class offerings.
en_US
Virtual worlds
Blended learning
Virtual teaming
Project management
Distance education
Blending on-campus and online experiences through the use of virtually immersive technologies
Article
oai:TheScholarship.intra.ecu.edu:10342/108472022-07-20T07:16:25Zcom_10342_41com_10342_1col_10342_44
Wu, Rui
Li, Jiahao
Ablan, Charles
Guan, Shanyue
Yao, Jason
2022-07-19T15:09:02Z
2022-07-19T15:09:02Z
2021-03
0975-8887
http://hdl.handle.net/10342/10847
en_US
gun detection
public safety
machine learning algorithms
Preprocessing Techniques’ Effect On Overfitting for VGG16 Fast-RCNN Pistol Detection
Article
oai:TheScholarship.intra.ecu.edu:10342/63302021-03-03T21:15:01Zcom_10342_41com_10342_1com_10342_122col_10342_42col_10342_124
Yasrobi, Seyedfaraz
2017-08-09T14:54:45Z
2018-03-14T18:00:42Z
2017-07-26
http://hdl.handle.net/10342/6330
Users' activities produce an enormous amount of data when using popular devices such as smartphones. These data can be used to develop behavioral models in several areas, including fraud detection, finance, recommendation systems, and marketing. However, fast analysis of such a large volume of data using traditional disk-based analytics may not be feasible. In-memory analytics is a newer technology for faster querying and processing of data stored in a computer's memory (RAM) rather than on disk storage. This research reports on the feasibility of analyzing the behavior of a large number of users, based on their activities in applications, using in-memory processing. We present a new instantaneous behavioral model that examines users' activities and actions, rather than the results of their activities, in order to analyze and predict their behaviors. For the purposes of this research, we designed software to simulate user activity data, such as users' swipes and taps, and studied the performance and scalability of this architecture for a large number of users.
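The simulate-then-aggregate-in-RAM workflow described above can be sketched in a few lines. The user counts, event names, and "most frequent action" profile below are invented stand-ins, not the thesis's simulator or model:

```python
import random
from collections import Counter, defaultdict

# Generate synthetic swipe/tap events for many users, then aggregate them
# entirely in memory: the core idea of in-memory analytics.
random.seed(42)
ACTIONS = ["tap", "swipe_left", "swipe_right", "long_press"]
events = [(f"user{random.randrange(1000)}", random.choice(ACTIONS))
          for _ in range(100_000)]

per_user = defaultdict(Counter)  # user -> action histogram, held in RAM
for user, action in events:
    per_user[user][action] += 1

# A toy "instantaneous" behavioral profile: each user's most frequent action.
profiles = {u: c.most_common(1)[0][0] for u, c in per_user.items()}
print(len(per_user), len(profiles))
```

Because every histogram lives in process memory, a profile lookup or update is a dictionary operation rather than a disk query, which is where the speed advantage of in-memory processing comes from.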
en
User behavior
User Behavior Analysis using Smartphones
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/69502021-03-03T21:18:53Zcom_10342_41com_10342_1com_10342_122col_10342_42col_10342_124
Pradyumn, Mudit
2018-08-14T14:46:35Z
2018-08-14T14:46:35Z
2018-07-17
http://hdl.handle.net/10342/6950
Twitter has over 330 million active monthly users producing roughly 500 million Tweets per day, or 200 billion Tweets a year, making it one of the largest human-generated opinion data collections. In addition to this major advantage, Twitter generates real-time data, making it possible to gain insights on trending information instantaneously. People post about a wide variety of subjects, including their opinions, feelings, situations, current trends, and products. This makes Twitter a great data source for analyzing people's sentiments on a variety of subjects. In this study, out of 1,025 research papers on Twitter data analytics from 2011-2017, only papers from 20 selected journals were considered for review. They were then classified by year of publication, title, data mining methods, and application areas. In the course of this study, a tool for sentiment analysis of Twitter data was developed and used to conduct a case study of individuals' opinions on marijuana use during pregnancy.
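A minimal lexicon-based scorer of the kind sentiment-analysis tools often start from can be sketched as follows. The word lists and example Tweets are invented, and the thesis's actual tool is not described in enough detail in the abstract to reproduce:

```python
# Toy lexicon-based sentiment scoring: count positive and negative words
# and label the text by the sign of the difference.
POSITIVE = {"good", "great", "love", "happy", "safe"}
NEGATIVE = {"bad", "sad", "hate", "risk", "harmful"}

def sentiment(tweet):
    """Label a tweet positive, negative, or neutral by lexicon word counts."""
    words = tweet.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this great idea"),
      sentiment("this is bad and harmful"),
      sentiment("just a statement"))
```

Real systems add negation handling, tokenization, and usually a trained classifier, but the lexicon baseline is the common reference point such tools are measured against.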
en
Insight
Extraction
Classification
Sentiment Analysis
SYSTEMATIC REVIEW OF LITERATURE USING TWITTER AS A TOOL
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/45172021-03-03T21:10:40Zcom_10342_41com_10342_1com_10342_122col_10342_42col_10342_124
Shahbazi, Ali
2014-08-28T15:02:50Z
2017-02-07T22:22:34Z
2014
http://hdl.handle.net/10342/4517
In this thesis we introduce a generic security framework for public clouds, called the Treasure Island Security Framework, designed to address cloud computing security and, specifically, key management in untrusted domains. Many cloud structures and services are now offered, but as an inevitable concomitant of these new products, security issues are increasing rapidly. Availability, data integrity, lack of trust, and confidentiality are of great importance to cloud computing users, who may grow more skeptical of cloud services when they feel they might lose control over their data or over the structures the cloud provides for them. Because control of data is deferred from customers to cloud providers, with an unknown number of third parties in between, it is almost impossible to apply traditional security methods. We present our security framework, with distributed keys and sequential addressing, in a simple abstract model with a master server and an adequate number of chunk servers. We assume a fixed chunk size for large files and a sequentially addressed distributed file system with four separate keys to encrypt/decrypt each file. After reviewing the process, we analyze the Distributed Key and Sequentially Addressing Distributed File System and its security risk model. The focus of this thesis is on increasing security in untrusted domains, especially key management in the public cloud. We discuss cryptographic approaches to key management and suggest a novel cryptographic method for a public cloud's key-management system based on forward-secure public-key encryption, which supports a non-interactive publicly verifiable secret sharing scheme through a tree access structure. We believe that the Treasure Island Security Framework can provide a more secure environment in untrusted domains, such as the public cloud, in which users can securely reconstruct their secret keys (e.g., lost passphrases).
Finally, we discuss the advantages and benefits of the framework, with its Distributed Key and Sequentially Addressing Distributed File System and cryptographic approaches, and how it helps improve security levels in cloud systems.
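The secret-key reconstruction the framework enables rests on threshold secret sharing. As a concrete (and much simpler) illustration than the thesis's publicly verifiable, tree-structured scheme, here is textbook Shamir secret sharing over a prime field:

```python
import random

# Standard Shamir secret sharing: a secret becomes the constant term of a
# random degree-(k-1) polynomial; any k of n shares reconstruct it, fewer
# reveal nothing. This is a classic building block for distributed key
# reconstruction, not the thesis's exact scheme.
PRIME = 2**127 - 1  # a Mersenne prime comfortably larger than the secret

def split_secret(secret, k, n, rng=random.Random(0)):
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation of the polynomial at x = 0.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split_secret(123456789, k=3, n=5)
print(reconstruct(shares[:3]))  # 123456789 -- any 3 of the 5 shares suffice
```

The three-argument `pow` with exponent `-1` (modular inverse) requires Python 3.8+.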
Computer science
Cryptography
Key-management
PKE
Security
Security in untrusted domain
Treasure Island Security framework : A Generic Security Framework for public clouds
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/128522023-06-05T13:54:18Zcom_10342_41com_10342_1com_10342_122col_10342_44col_10342_124
Antar, Marwa
2023-06-05T13:54:18Z
2023-06-05T13:54:18Z
2023-05-03
http://hdl.handle.net/10342/12852
Sleep plays a vital role in learning and memory consolidation. Several studies used brain models of sleep deprivation (SD) and insomnia to study the association between sleep deficiency and cognitive decline conditions. SD was found to cause similar, albeit subtle, cognitive decline symptoms displayed by dementia patients affecting attentional functions, decision making, working and long-term memory. This study examines the effect of sleep restriction (SR) on brain networks and utilizes Functional Connectivity (FC) analysis to identify patterns of information processing between different brain regions. It particularly applies weighted phase-lag index (wPLI) to quantify brain signals synchronization levels during a visual oddball paradigm task that evokes event-related potentials (ERPs) associated with face recognition. This study also examines the viability of graph theoretic analysis (GTA), which provides a holistic view on the brain network topology. GTA quantifies the brain connectivity features to assess the global efficiency and local efficiency of information processing, pre- and post- SR intervention. Significant alterations were found in all graph indices mainly in α-, µ- and β- frequency bands due to induced mental fatigue. The obtained results reveal significantly lower local connections (p < 0.05) and lower global efficiency (p < 0.001), particularly in the α- band as a result of mental fatigue, reflecting the impact of sleep loss on attention and memory processing.
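The wPLI measure named above follows directly from its definition: the magnitude of the mean imaginary part of the cross-spectrum across trials, normalized by the mean magnitude of that imaginary part. The single-frequency, synthetic-trial setup below is illustrative only; real EEG analysis computes the cross-spectrum per frequency band.

```python
import cmath
import math
import random

# Weighted phase-lag index (wPLI) over trials, from the imaginary part of
# the cross-spectrum between two channels' complex spectral values.
def wpli(x_trials, y_trials):
    imags = [(x * y.conjugate()).imag for x, y in zip(x_trials, y_trials)]
    denom = sum(abs(v) for v in imags)
    return abs(sum(imags)) / denom if denom else 0.0

rng = random.Random(1)
# Two channels with a consistent 90-degree phase lag -> wPLI near 1.
x = [cmath.exp(1j * rng.uniform(0, 2 * math.pi)) for _ in range(500)]
y = [xi * cmath.exp(-1j * math.pi / 2) for xi in x]
print(round(wpli(x, y), 3))  # 1.0 for a perfectly consistent lag
```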
en
Functional connectivity
Graph theory
network analysis
sleep deprivation
sleep restriction
cognitive decline
brain network
event-related potential
electroencephalogram
Graph Theoretic analysis of the Human Brain Functional Connectivity Alteration Due to Sleep Restriction
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/64102021-03-03T21:15:27Zcom_10342_41com_10342_1col_10342_43
Zarei, Hamzeh
Kin Peng Hui, Felix
Duffield, Colin
Wang, George
2017-09-11T15:50:38Z
2017-09-11T15:50:38Z
2017-07
2210-8505
http://hdl.handle.net/10342/6410
In large public infrastructure projects, political risks due to the power imbalance between central and delivery agencies are often overlooked or underestimated. The delivery agency's motive to distort information for political gain should be deemed a risk that creates uncertainty in planning the outcomes of large projects. In this study, seven large infrastructure projects in the state of Victoria, Australia are examined through a workshop involving key stakeholders who had played active roles in these projects. The findings revealed that power asymmetry between central and delivery agencies exists and can lead to optimism bias, which in turn creates uncertainty and the risk of overpromising in the business case. Power asymmetry exists in large infrastructure projects because the central agencies usually have only the responsibility, but not the skill set, needed to measure the robustness of the business case. These types of political risks are difficult to quantify and even to detect. This paper recommends a few managerial strategies that have reference value and/or can be used to mitigate and circumvent this risk.
en
Political risks
Power asymmetry
Infrastructure projects
The Risk of Power Imbalance in Project Delivery: A Study of Large Victorian Public Infrastructure Projects
Article
oai:TheScholarship.intra.ecu.edu:10342/76502021-03-03T21:26:09Zcom_10342_41com_10342_1com_10342_122col_10342_42col_10342_124
Lakshminarayana, Deepthi Hassan
2020-02-04T15:24:10Z
2020-06-01T08:01:52Z
2019-11-27
http://hdl.handle.net/10342/7650
With the growing rate of cyber-attacks, there is a significant need for intrusion detection systems (IDS) in networked environments. As intrusion tactics become more sophisticated and more challenging to detect, improved intrusion detection technology is needed to retain user trust and preserve network security. Over the last decade, several detection methodologies have been designed to provide users with reliability, privacy, and information security. The first half of this thesis surveys the literature on intrusion detection techniques based on machine learning, deep learning, and blockchain technology from 2009 to 2018. The survey identifies applications, drawbacks, and challenges of these three intrusion detection methodologies for identifying threats in computer network environments. The second half of this thesis proposes a new machine learning model for intrusion detection that employs random forest, naive Bayes, and decision tree algorithms. We evaluate its performance on NSL-KDD, a standard dataset of simulated network attacks used in the literature. We discuss preprocessing of the dataset and feature selection for training our hybrid model, and report its performance using standard metrics such as accuracy, precision, recall, and F-measure. In the final part of the thesis, we evaluate our intrusion model against the performance of existing machine learning models for intrusion detection reported in the literature. Our model predicts the Denial of Service (DoS) attack using a random forest classifier with 99.81% accuracy, the Probe attack with 97.89% accuracy, and the R2L attack with 97.92% accuracy, achieving equivalent or superior performance compared with existing models.
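As a minimal illustration of one of the three classifiers named above, here is a from-scratch Gaussian naive Bayes on a toy two-feature "connection record" set. The features and values are invented; the thesis itself trains on the NSL-KDD dataset with far more features.

```python
import math
from collections import defaultdict

# Tiny Gaussian naive Bayes: per class, fit a mean/variance per feature,
# then classify by maximum log-posterior under independence assumptions.
def fit(X, y):
    stats = {}
    by_class = defaultdict(list)
    for xi, yi in zip(X, y):
        by_class[yi].append(xi)
    for c, rows in by_class.items():
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        vars_ = [sum((v - m) ** 2 for v in col) / n + 1e-9
                 for col, m in zip(zip(*rows), means)]
        stats[c] = (math.log(n / len(X)), means, vars_)
    return stats

def predict(stats, x):
    def logp(c):
        prior, means, vars_ = stats[c]
        return prior + sum(
            -0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
            for xi, m, v in zip(x, means, vars_))
    return max(stats, key=logp)

# Invented (duration, bytes) records: short/small = normal, long/huge = DoS.
X = [[0.1, 100], [0.2, 120], [0.15, 110],
     [5.0, 9000], [4.5, 8800], [5.5, 9100]]
y = ["normal"] * 3 + ["dos"] * 3
model = fit(X, y)
print(predict(model, [0.12, 105]))  # normal
print(predict(model, [5.2, 9000]))  # dos
```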
en
security
network
deep learning
NSL KDD
algorithms
classifiers
Intrusion detection using machine learning algorithms
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/128432023-06-05T13:52:41Zcom_10342_41com_10342_1com_10342_122col_10342_44col_10342_124
Glosson, Gabriel
2023-06-05T13:52:40Z
2023-06-05T13:52:40Z
2023-04-28
http://hdl.handle.net/10342/12843
Current methods of producing clean water are not capable of meeting growing demands. One method of producing clean water is desalination, the process of removing salt and other minerals from seawater. However, traditional desalination methods can be energy-intensive and generate significant amounts of waste. To help address these issues, a hybrid wave-to-water desalination system that combines reverse osmosis (RO) with supercritical water desalination (SCWD) can produce freshwater from seawater. SCWD treats the brine produced by RO, while RO produces freshwater at a lower energy cost. The system utilizes an oscillating surge wave energy converter (OSWEC) to harness the energy of ocean waves to directly pressurize the seawater feeding into the RO system. Using ocean waves as an energy source makes the system renewable and reduces the carbon footprint of the desalination process. This thesis presents the development of a simulation for a small-scale zero-waste desalination system powered by off-grid renewable energy. The model of the system was developed using MATLAB Simulink along with WEC-Sim. A sensitivity analysis was performed on the model to determine the optimal configuration of key system parameters. The sensitivity analysis was conducted using an irregular wave pattern with a significant wave height of 0.117 m and a period of 1.68 s. The parameters investigated were the system's power take-off (PTO) volumetric displacement, accumulator size, and RO membrane type. The results showed that the optimized system used an SW30HR-380 RO membrane, a PTO volumetric displacement of 1975 cm^3/rad, and a 10-gallon accumulator. The average water production rate for the optimized system was 32.644 gpm.
en
Desalination
Renewable Energy
Wave Energy Converter
Wave-to-Water
Reverse Osmosis
Supercritical Water Desalination
Zero-Liquid-Discharge
Sensitivity Analysis
Simulating and Optimizing a Zero-Waste Wave-To-Water Desalination System
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/60452021-03-03T21:09:51Zcom_10342_41com_10342_1com_10342_122col_10342_42col_10342_124
Yadranjiaghdam, Babak
2017-01-11T22:12:15Z
2017-08-24T14:50:52Z
2016-12-13
http://hdl.handle.net/10342/6045
Twitter is an online social networking service with more than 300 million users, generating a huge amount of information every day. Twitter's most important characteristic is its ability to let users tweet about events, situations, feelings, opinions, or even something totally new, in real time. Currently there are different workflows offering real-time data analysis for Twitter, presenting general processing over streaming data. This study develops an analytical framework with in-memory processing capability to extract and analyze structured and unstructured Twitter data. The proposed framework includes data ingestion, stream processing, and data visualization components, with the Apache Kafka messaging system used to perform the data ingestion task. Furthermore, Spark makes it possible to perform sophisticated data processing and machine learning algorithms in real time. We have conducted a case study on tweets about the earthquake in Japan and the reactions of people around the world, with analysis of the time and origin of the tweets.
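The pipeline's shape can be mimicked in plain Python, with a generator standing in for the Kafka topic and a counting function standing in for the Spark job. The sample tweets are invented.

```python
from collections import Counter

# A stripped-down, in-memory stand-in for the ingestion -> stream-processing
# pipeline in the abstract.
def tweet_stream():
    # Plays the role of a Kafka topic: yields tweets one at a time.
    sample = ["earthquake in japan", "sending support", "earthquake felt",
              "stay safe japan", "coffee time"]
    yield from sample

def keyword_counts(stream, keywords):
    # Plays the role of the stream-processing job: counts keyword mentions.
    counts = Counter()
    for tweet in stream:
        for kw in keywords:
            if kw in tweet:
                counts[kw] += 1
    return counts

counts = keyword_counts(tweet_stream(), ["earthquake", "japan"])
print(counts["earthquake"], counts["japan"])  # 2 2
```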
en
Real-time
Big Data
Twitter
DEVELOPING A REAL-TIME DATA ANALYTICS FRAMEWORK FOR TWITTER STREAMING DATA
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/85672021-03-03T22:08:32Zcom_10342_41com_10342_1com_10342_122col_10342_42col_10342_124
Butler, Rhonda R
2020-06-23T18:03:37Z
2020-06-23T18:03:37Z
2020-06-22
http://hdl.handle.net/10342/8567
Two issues continually plaguing the software industry are software size calculation and project effort estimation. Incorrect estimates may lead to inappropriate allocation of resources (people), shortage of time, insufficient funds, and possibly project failure. The purpose of this study is to explore current methods of software effort estimation and their applicability to low-code application development. The study seeks to answer the research question: Can the current methods be used to estimate effort for applications built with low-code platforms? The goal is to analyze some of the popular estimation methods to see if they can be applied to low-code application development.
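As a concrete illustration of the kind of size-based estimation method under discussion, here is basic COCOMO, a classic model that maps size in KLOC to effort in person-months. Whether COCOMO is among the specific methods the study analyzes is not stated; the 10-KLOC example is invented.

```python
# Basic COCOMO effort estimate: effort = a * KLOC^b person-months.
# Default coefficients are for the "organic" (small, familiar) project class.
def cocomo_effort(kloc, a=2.4, b=1.05):
    return a * kloc ** b

print(round(cocomo_effort(10), 1))  # 26.9 person-months for a 10 KLOC project
```

The difficulty the study probes is visible even here: low-code platforms have no meaningful KLOC, so the model's size input has no obvious analogue.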
ESTIMATING EFFORT FOR LOW-CODE APPLICATIONS
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/38892021-03-03T20:52:41Zcom_10342_122com_10342_41com_10342_1col_10342_124col_10342_45
Vail, John Edwards
2012-05-20T15:25:15Z
2012-05-20T15:25:15Z
2012
http://hdl.handle.net/10342/3889
Small businesses account for over fifty percent of the Gross National Product of the U.S. economy, and the security of their information systems is critical for them to operate, compete, and remain profitable. While many security studies have been conducted and reported on enterprise-scale organizations, similar research on small businesses in the U.S. is limited. One small business was evaluated through an information security audit to determine whether its information resources and network were adequately secure, and it is used as a test case to identify an approach a typical small business may take to secure its networks and data and avoid unnecessary liability exposure. By examining the specific risk factors in this case study, the author believes parallels can be drawn by other small businesses as a starting point for examining their own risk factors. Additionally, this study provides a series of proposed mitigation processes to improve the small business's network security that can be adopted by other small businesses in like circumstances. The mitigation processes are specifically tailored to the small business industry itself, as opposed to a larger organization that has greater exposure to risk vulnerability and also larger asset pools from which to secure its networks. The method utilized for this research was qualitative in nature, using a form of Participatory Action Research (PAR). This approach was most appropriate in that it allows the researcher to act in partnership with the small business to attempt to effect social change that will help in securing the small business's information resources.
An information security audit was performed on a small business to identify actual and potential threats, and an electronic questionnaire was distributed to the employees to gauge their individual perspectives on the clarity and comprehensibility of the business's security policy, the consequences of violations of that policy, how well the policy is disseminated and tracked for compliance, and whether they know the steps to be taken in response to an incident or disaster. There were four objectives of this study. The first objective was to evaluate the small business's information security posture. The second was to determine whether the small business had experienced any information technology security incidents. The third was to evaluate whether the incidents were caused by the lack of a policy, standard, or procedure; an ineffective policy, standard, or procedure; a lack of training and education; or a reluctance to enforce or monitor adherence to established policies, standards, or procedures. The fourth objective was to recommend to the small business any changes or additions that would reduce its exposure to information security threats, risks, and vulnerabilities through effective information security risk management.
Information technology
Information security
Information technology management
Information technology policy
IT due diligence
IT governance
Small business information security
Small Business Information Security
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/37872021-03-03T20:52:40Zcom_10342_41com_10342_1col_10342_45
Lesko, Charles J. Jr.
2012-03-28T13:38:45Z
2012-03-28T13:38:45Z
2011-09
Journal of Management and Strategy; 2:3 p. 25-34
http://hdl.handle.net/10342/3787
10.5430/jms.v2n3p25
Communication management plans are used to determine not only who needs what information but also how that information will be collected and transmitted. Now two evolving technologies are looking to drive project planners to develop new approaches and methods for planning communications in the coming years. The first of these technologies, the Semantic Web, is becoming a driving force in how computers make web content available to users. The second, Web three-dimensional (3D), focuses on web-based content presentation by providing a rich 3D Web-centric environment for users to access information and interact with other users. This effort discusses the advent of Semantic Web and Web 3D technologies and identifies many of the new planning considerations driving project information collection and analysis. The planning considerations for these two technologies are also discussed to aid in framing a new approach to project communications planning.
en_US
Communications planning
Semantic web
3-D web
Project management
Virtual worlds
A New Approach to Communications Management Planning Through 3D Web and Semantic Web Technologies
Article
oai:TheScholarship.intra.ecu.edu:10342/70322021-03-03T21:19:55Zcom_10342_41com_10342_1com_10342_122col_10342_42col_10342_124
Davis, Matthew C
2019-02-12T16:01:02Z
2019-02-12T16:01:02Z
2018-11-30
http://hdl.handle.net/10342/7032
Software Engineers are familiar with mutable and immutable object state. Mutable objects shared across modules may lead to unexpected results as changes to the object in one module are visible to other modules sharing the object. When provided a mutable object as input in Java, it is common practice to defensively create a new private copy of the object bearing the same state via cloning, serializing/de-serializing, specialized object constructor, or third-party library. No universal approach exists for all scenarios and each common solution has well-known problems. This research explores the applicability of concepts within the Computer Engineering storage field related to snapshots. This exploration results in a simplified method of memory snapshotting implemented within OpenJDK 10. A novel runtime-managed method is proposed for declaring intent for object state to be unshared within the method signature. Preliminary experiments evaluate the attributes of this approach. A path for future research is proposed, including differential snapshots, alternative block sizes, improving performance, and exploring a tree of snapshots as a foundation to reason about changes to object state over time.
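The aliasing hazard motivating this work translates directly to other managed languages. A minimal Python sketch of shared-mutable-state leakage and the defensive-copy remedy (function and variable names are illustrative; the thesis's contribution is a runtime-managed snapshot mechanism in OpenJDK, not this library-level workaround):

```python
import copy

# Two "modules" receiving the same mutable object: one stores the shared
# reference, the other defensively stores a private copy.
def register(config, registry):
    registry.append(config)                  # stores a shared reference

def register_defensively(config, registry):
    registry.append(copy.deepcopy(config))   # private, snapshot-like copy

shared = {"retries": 3}
aliased, copied = [], []
register(shared, aliased)
register_defensively(shared, copied)

shared["retries"] = 99                       # mutation after the hand-off
print(aliased[0]["retries"])  # 99 -- change leaked through the shared reference
print(copied[0]["retries"])   # 3  -- defensive copy is unaffected
```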
en
Snapshot
runtime
mutability
HotSpot
clone
serialize
copy constructor
Applying Mutable Object Snapshots to a High-level Object-Oriented Language
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/87962021-08-02T14:43:22Zcom_10342_41com_10342_1com_10342_122col_10342_42col_10342_124
Vilkomir, Aleksei
2020-12-18T15:45:57Z
2020-12-18T15:45:57Z
2020-11-16
http://hdl.handle.net/10342/8796
Machine learning is becoming an increasingly important part of many domains, both inside and outside of computer science. With this has come an increase in developers learning to write machine learning applications in languages like Python, using application programming interfaces (APIs) such as pandas and scikit-learn. However, given the complexity of these APIs, they can be challenging to learn, especially for new programmers. To create better tools for assisting developers with machine learning APIs, we need to understand how these APIs are currently used. In this thesis, we present a study of machine learning API usage in Python code in a corpus of machine learning projects hosted on Kaggle, a machine learning education and competition community site. We analyzed the most frequently used machine learning related libraries and the sub-modules of those libraries. Next, we studied the usage of different calls used by the developers to solve machine learning tasks. We also found information about which libraries are used in combination and discovered a number of cases where the libraries were imported but never used. We end by discussing potential next steps for further research and developments based on our work results.
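The first step of such a study, determining which libraries a Python file imports, is directly expressible with the standard-library ast module. The sample source and counting scheme below are illustrative, not the thesis's exact tooling.

```python
import ast
from collections import Counter

# Parse Python source and count top-level library names across both
# `import X` and `from X import Y` forms -- the raw material for spotting
# frequently used (or imported-but-unused) machine learning libraries.
def count_imports(source):
    counts = Counter()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            for alias in node.names:
                counts[alias.name.split(".")[0]] += 1
        elif isinstance(node, ast.ImportFrom) and node.module:
            counts[node.module.split(".")[0]] += 1
    return counts

sample = ("import pandas as pd\n"
          "from sklearn.linear_model import LinearRegression\n"
          "import numpy\n")
counts = count_imports(sample)
print(counts["pandas"], counts["sklearn"], counts["numpy"])  # 1 1 1
```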
en
Machine Learning API
Machine Learning exploratory
An Empirical Exploration of Python Machine Learning API Usage
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/90892021-10-01T14:12:47Zcom_10342_41com_10342_1com_10342_122col_10342_42col_10342_124
Williams, Baylea
2021-06-14T01:53:53Z
2021-06-14T01:53:53Z
2021-04-20
http://hdl.handle.net/10342/9089
The research conducted in this thesis was to serve as a baseline for determining which human demographics are most likely to be predictable through touch screen interactions. In addition, it served as a way of finding which machine learning models are best suited to a larger-scale experiment on this phenomenon. We were able to reliably predict both the age and race of participants, and showed that the best-performing machine learning models were Random Forest decision trees and Naïve Bayes, which produced higher classification accuracy than the other classifiers tested. While the sample size used in this study was small, due to the ongoing Covid-19 pandemic, the results indicate that this area is worthy of significant further exploration.
en
mobile devices
demographics
gaming
psychology
predictive
Tactile Demographics: Predicting Demographic Information Using Touch Data from Mobile Devices
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/63582021-03-03T21:15:18Zcom_10342_41com_10342_1com_10342_122col_10342_44col_10342_124
Tucker, Bryent
2017-08-09T16:00:13Z
2017-08-09T16:00:13Z
2017-07-18
http://hdl.handle.net/10342/6358
Cardiac motion can be monitored non-invasively for the assessment of cardiovascular function by using medical imaging systems and motion tracking algorithms. Existing tracking approaches require a priori understanding of the non-rigid motion of the target system, which could change over multiple cardiac cycles and lead to tracking failures. The purpose of this research is to develop the algorithm and software, with computer vision techniques, to continuously track the motion of a user-defined region of the heart images. The proposed algorithm improves upon existing techniques because it does not require an underlying motion model, it quantifies the quality of tracking, and it can recover from a failed tracking estimate. The motion estimation of a non-rigid system will be done by a piecewise tracking approach that breaks up the region of interest into several small segments (patches), which can be approximated with interconnected pseudo-rigid segments. These segments will be initialized based on two criteria: 1) motion within a segment must follow the pseudo-rigid body model; and 2) motion in neighboring segments must be similar to each other. Segments are subsequently tracked as pseudo-rigid bodies, and the criteria described above are also used to detect failures in tracking. If a failure were to occur, the tracking algorithm will be reinitialized automatically. This algorithm was shown to be accurate and efficient, and has been tested on several heart motion data sets.
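The per-segment step of such a piecewise approach, fitting a rigid rotation-plus-translation to matched points within a patch, can be sketched as a 2D Procrustes fit. The point data below is synthetic; the thesis's algorithm adds tracking-quality checks and automatic reinitialization on top of steps like this.

```python
import math

# Least-squares rigid (rotation + translation) fit between matched 2D point
# sets: center both sets, recover the rotation angle from the cross/dot
# sums, then solve for the translation of the centroids.
def fit_rigid_2d(src, dst):
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    s_cross = s_dot = 0.0
    for (px, py), (qx, qy) in zip(src, dst):
        px, py, qx, qy = px - csx, py - csy, qx - cdx, qy - cdy
        s_cross += px * qy - py * qx
        s_dot += px * qx + py * qy
    theta = math.atan2(s_cross, s_dot)
    tx = cdx - (csx * math.cos(theta) - csy * math.sin(theta))
    ty = cdy - (csx * math.sin(theta) + csy * math.cos(theta))
    return theta, tx, ty

# Synthetic patch corners, rotated 30 degrees and translated by (2, 1).
src = [(0, 0), (1, 0), (0, 1), (1, 1)]
a = math.radians(30)
dst = [(x * math.cos(a) - y * math.sin(a) + 2,
        x * math.sin(a) + y * math.cos(a) + 1) for x, y in src]
theta, tx, ty = fit_rigid_2d(src, dst)
print(round(math.degrees(theta), 1), round(tx, 3), round(ty, 3))  # 30.0 2.0 1.0
```

A large residual after such a fit is exactly the kind of signal the algorithm can use to detect that a segment's motion is no longer pseudo-rigid.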
en
Biomedical Engineering
Heart Motion Tracking
Development of a Heart Motion Tracking System using Non-invasive Imaging Data
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/106022022-05-05T13:21:42Zcom_10342_41com_10342_1col_10342_10600col_10342_45
Sutton, Raegine
2022-05-05T13:21:42Z
2022-05-05T13:21:42Z
2022-04-19
http://hdl.handle.net/10342/10602
Digital piracy is the unauthorized, illegal copying and distribution of digital copyrighted content. Copyrighted content refers to intellectual property: intangible creations of the human intellect. Individuals who download songs, movies, and art from a website for free are committing digital piracy. For instance, watching a video on YouTube, transferring the URL to an MP3 converter on a different website, and converting it to an MP3 music file for free is illegal. There are numerous names for digital piracy depending on the specific type of content being copied and distributed; however, this paper examines digital piracy as a whole. The paper begins by introducing digital piracy, then covers the psychological aspects of digital piracy and why people choose to engage in it. The paper then discusses the dangers and attacks associated with digital piracy. Lastly, it discusses the consequences and solutions, and ends with a conclusion on digital piracy.
en_US
Digital Piracy
crime
fines
attacks
malware
Is Digital Piracy a Crime?
Working Papers
oai:TheScholarship.intra.ecu.edu:10342/76302021-12-01T09:01:54Zcom_10342_41com_10342_1com_10342_122col_10342_42col_10342_124
Pala, Venkatesh Reddy
2020-02-04T15:21:31Z
2021-12-01T09:01:54Z
2019-09-16
http://hdl.handle.net/10342/7630
Most Western languages have witnessed the power of Artificial Intelligence (AI) in one form or another. This achievement is primarily due to the efforts of many researchers contributing to the field of computational linguistics. However, there are many languages in the world that have a long history and abundant literature but see little research activity, due to factors such as lack of motivation and the non-availability of open-source corpora. Telugu is one such language, for which there has been a lack of effort towards digitization. The focus of this research is to extract text from images in order to produce corpora that enable computational linguistics and conserve the literature. Deep learning with neural networks has proven solutions in this domain. Optical character recognition (OCR) is the solution adopted for the digitization of Western languages; however, the same cannot be applied directly to Telugu because of the complexity of its script and the ambiguity of its dialects. To address this issue, we built a neural network system that can later be adapted for languages like Telugu. Using neural networks, we achieved an accuracy of 90 percent. Character segmentation is handled by the neural networks, while we specified segmentation only at the word level. A comparative study of the system we developed against commercial APIs showed our system to be more accurate.
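Word-level segmentation of the kind described above is commonly done with a projection profile: split the text-line image wherever a run of columns contains no ink. This is a generic technique, not necessarily the thesis's exact method; the binary "image" below is a toy.

```python
# Projection-profile word segmentation on a binary image (1 = ink, 0 = blank):
# find maximal runs of columns that contain at least one ink pixel.
def segment_words(image):
    width = len(image[0])
    col_has_ink = [any(row[c] for row in image) for c in range(width)]
    segments, start = [], None
    for c, ink in enumerate(col_has_ink + [False]):  # sentinel closes last run
        if ink and start is None:
            start = c
        elif not ink and start is not None:
            segments.append((start, c))
            start = None
    return segments

line = [
    [1, 1, 0, 0, 1, 1, 1],
    [1, 0, 0, 0, 0, 1, 0],
]
print(segment_words(line))  # [(0, 2), (4, 7)] -- two "words" split by a gap
```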
en
OCR
Text Extraction
Indic Scripts
TEXT EXTRACTION FROM IMAGES USING NEURAL NETWORKS
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/74742021-08-01T08:01:55Zcom_10342_41com_10342_1com_10342_122col_10342_42col_10342_124
Khademorezaian, Kasra
2019-08-22T12:40:28Z
2021-08-01T08:01:55Z
2019-07-22
http://hdl.handle.net/10342/7474
Online stores have created more opportunities for firms to offer different products and services to their customers. These online stores produce a tremendous amount of data that serves different purposes, including revenue prediction. Online stores usually keep a log of the customers who visit their website, including session information, the products they showed interest in, and IP, location, and device information. These features have been explored extensively in studies of customer churn and recommender systems, but using location information for prediction has not been explored as much. The first part of this thesis systematically reviews articles on revenue prediction by publication date, application area, evaluation criteria, and prediction technique, providing a good understanding of the research already conducted, the evolution of the topic over the years, and possible research opportunities. The second part focuses on the prediction of Google store revenue data. Using linear regression as a baseline, it evaluates the predictive power of different machine learning techniques, including gradient boosting, support vector regression, and neural networks. The data are collected from the Google Analytics demo account, which contains 903,653 observations and 55 features. The goal of this study is to predict the total transaction value per user from December 1st, 2018 to January 31st, 2019, and to compare the performance of the different prediction techniques.
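The linear-regression baseline can be written out in a few lines. Here is one-feature ordinary least squares on made-up numbers; the actual study fits many features from the Google Analytics data.

```python
# One-feature ordinary least squares via the closed-form solution:
# slope = cov(x, y) / var(x), intercept = mean(y) - slope * mean(x).
def fit_ols(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Invented example: page views per session vs. transaction revenue.
xs = [1, 2, 3, 4, 5]
ys = [2, 4, 6, 8, 10]
slope, intercept = fit_ols(xs, ys)
print(slope, intercept)  # 2.0 0.0
```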
en
Revenue Prediction
CUSTOMER REVENUE PREDICTION FROM GEOGRAPHICAL DATA
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/67432021-03-03T21:17:38Zcom_10342_41com_10342_1com_10342_122col_10342_42col_10342_124
Anderson, David
2018-05-25T17:30:37Z
2018-05-25T17:30:37Z
2018-04-19
http://hdl.handle.net/10342/6743
PHP is a common language used for creating dynamic websites. These websites often include the use of databases to store data, with embedded SQL queries constructed within the PHP code and executed through the use of database access libraries. One of these libraries is the original MySQL library that, despite not being supported in current versions of PHP, is still widely used in existing PHP code. As a first step towards developing program comprehension and transformation tools for PHP systems that use this library, this research presents a query modeling tool that models embedded SQL queries in PHP systems and an empirical study conducted through analysis of these models. A main focus of this study was to establish common patterns developers use to construct SQL queries and to extract information about their occurrences in actual PHP systems. Using these patterns, the parts of queries that are generally static, and the parts that are often computed at runtime were extracted. For dynamically computed query parts, we also extracted data about which PHP language features are used to construct them. Finally, information about which clauses most often differ based on control flow was extracted as well as counts for how often each SQL query type and SQL clause is used in practice. We believe this information is useful for future work on building program understanding and transformation tools to renovate PHP code using database libraries.
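A toy version of the first modeling step can be sketched in Python. The thesis's actual tool models dynamically constructed queries, which a regular expression cannot handle; the PHP snippet and pattern below are illustrative only and cover just literal string arguments to the legacy mysql_query() call.

```python
import re

# Find double-quoted string literals passed to mysql_query() and classify
# each query by its leading SQL keyword (SELECT, UPDATE, ...).
QUERY_RE = re.compile(r'mysql_query\s*\(\s*"([^"]*)"', re.IGNORECASE)

def extract_queries(php_source):
    return [(q.split()[0].upper(), q) for q in QUERY_RE.findall(php_source)]

php = '''<?php
$r = mysql_query("SELECT name FROM users WHERE id = $id");
mysql_query("UPDATE users SET last_seen = NOW() WHERE id = $id");
?>'''
for kind, query in extract_queries(php):
    print(kind)  # SELECT, then UPDATE
```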
en
MySQL
Program Analysis
Database
Modeling
Empirical Software Engineering
Modeling and Analysis of SQL Queries in PHP Systems
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/60712021-03-03T21:13:19Zcom_10342_41com_10342_1com_10342_122col_10342_42col_10342_124
Braddy, Shawn
2017-02-22T13:24:38Z
2017-02-22T13:24:38Z
2016-05-16
http://hdl.handle.net/10342/6071
Social media has become an integral part of today's society and has continued to grow tremendously throughout the world. People communicate through social media constantly, and it has become a pivotal place for people to gather information related to their interests. It is also a place where people can express their views to friends, family, and followers immediately, with just one click of a button. Naturally, society holds a wide range of views encompassing an unlimited number of subjects. Some of these views can be deemed negative, positive, neutral, or other. My research will attempt to identify the sentimental impact that different views, in the form of social interactions online, can have on the people who are watching, listening, and reading. Furthermore, I will research how the integration of neutral and positive statements can stimulate positive and productive conversation within a community on both a small and a large scale. This will be quantified by changes in the overall reaction of the various communities over time following intervention. The main contribution of this thesis is to show a correlation between statements with positive or neutral sentiment and a trend towards positive or neutral responses after moderation. Events generating large discussion or interest will be evaluated to ensure a significant amount of data is available for analysis. To integrate the social aspect, software will be explored and evaluated, or built, and then utilized to connect participants and extract the appropriate metrics.
Sentiment analysis
Sentiment analysis study with an emphasis on the integration of different statement polarities and the evaluation of the resulting sentiments
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/37532021-03-03T20:52:34Zcom_10342_41com_10342_1com_10342_122col_10342_42col_10342_124
Rowe, Fred
2012-01-18T20:15:26Z
2012-01-18T20:15:26Z
2011
http://hdl.handle.net/10342/3753
Cloud computing has garnered a great deal of interest in the past few years. The availability of on-demand computational power is presumed to provide substantial IT infrastructure cost-savings, partially through the reduction of maintenance and administration costs. However, in order to take advantage of these savings, it is often required that legacy applications be rewritten at least partially, if not in entirety, to operate in these environments. As a part of re-architecting these legacy assets for cloud computing environments, the software architect may also consider application modifications providing other cost benefits which may have been cost prohibitive to implement in a more traditional computing environment. Although not a new technology, the combination of parallel computing and cloud environments can offer a number of benefits to many application categories if the cost of making the necessary changes to the application and setting up and maintaining the environment can be justified.  This thesis explores the use of cloud computing to provide a flexible deployment environment in which to run a migrated legacy application using one of the popular parallel computing frameworks. The ability to easily and rapidly configure and deploy hardware and software to create a cloud capable of executing applications with parallelism combines the benefits of these technologies in a powerful manner. In order to make an informed decision about the potential benefits of such an environment, the owner of those assets needs to be able to balance any savings against any costs incurred to enable existing corporate business applications to run in such an environment.  An approach to performing such an analysis is presented in this thesis. To provide some quantitative means of measuring benefits, benchmark results of the computational resources required by the application in the different environments are provided. 
Additionally, offsetting costs such as software re-architecting and refactoring are considered.
Computer science
Cloud
Hadoop
Legacy
Migration
Parallel
Software
Migrating Legacy Software Applications to Cloud Computing Environments : A Software Architect's Approach
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/117802022-11-22T08:16:46Zcom_10342_41com_10342_1col_10342_43
Massarra, Carol C.
Orooji, Fatemeh
2022-11-21T19:41:37Z
2022-11-21T19:41:37Z
2022-01-24
2297-3362
http://hdl.handle.net/10342/11780
10.3389/fbuil.2021.745914
en_US
wind hazard
cost effectiveness
resilient construction
Generalized Cost-Effectiveness of Residential Wind Mitigation Strategies for Wood-Frame, Single Family House in the USA
Article
oai:TheScholarship.intra.ecu.edu:10342/95782022-02-01T08:15:45Zcom_10342_7351com_10342_6421com_10342_41com_10342_1col_10342_9479col_10342_44
Sylcott, Brian
Lin, Chia-Cheng
Williams, Keith
Hinderaker, Mark
2022-01-31T16:24:40Z
2022-01-31T16:24:40Z
2020-10-11
http://hdl.handle.net/10342/9578
10.2196/24950
Accurately measuring postural sway is an important part of balance assessment and rehabilitation. Although force plates give accurate measurements, their costs and space requirements make their use impractical in many situations. The work presented in this paper aimed to address this issue by validating a virtual reality (VR) headset as a relatively low-cost alternative to force plates for postural sway measurement. The HTC Vive (HTC Corporation) VR headset has built-in sensors that allow for position and orientation tracking, making it a potentially effective tool for balance assessments. Participants in this study were asked to stand upright on a force plate (NeuroCom; Natus Medical Incorporated) while wearing the HTC Vive. Position data were collected from the headset and force plate simultaneously as participants experienced a custom-built VR environment that covered their entire field of view. The intraclass correlation coefficient (ICC) was used to examine the test-retest reliability of the postural control variables, which included the normalized path length, root mean square (RMS), and peak-to-peak (P2P) value. These were computed from the VR position output data and the center of pressure (COP) data from the force plate. Linear regression was used to investigate the correlations between the VR and force plate measurements. Our results showed that the test-retest reliability of the RMS and P2P value of VR headset outputs (ICC: range 0.285-0.636) was similar to that of the RMS and P2P value of COP outputs (ICC: range 0.228-0.759). The linear regression between VR and COP measures showed significant correlations in RMSs and P2P values. Based on our results, the VR headset has the potential to be used for postural control measurements. However, the further development of software and testing protocols for balance assessments is needed.
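The sway summary metrics named in the abstract (RMS, P2P, normalized path length) are standard computations on a displacement trace. A minimal sketch, not the study's actual pipeline (the function name and the definition of normalized path length as path per second are illustrative assumptions):

```python
import math

def sway_metrics(x, fs):
    """Summary metrics for a 1-D sway trace x sampled at fs Hz:
    RMS about the mean, peak-to-peak amplitude, and normalized
    path length (total path travelled per second)."""
    mean = sum(x) / len(x)
    rms = math.sqrt(sum((v - mean) ** 2 for v in x) / len(x))
    p2p = max(x) - min(x)
    path = sum(abs(b - a) for a, b in zip(x, x[1:]))
    npl = path / (len(x) / fs)
    return rms, p2p, npl
```

In the study these quantities would be computed separately for the headset position trace and for the force-plate COP trace, then compared via ICC and linear regression.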
postural sway
virtual reality
force plate
center of pressure
Investigating the Use of Virtual Reality Headsets for Postural Control Assessment: An Instrument Validation Study
Article
oai:TheScholarship.intra.ecu.edu:10342/59402021-03-03T21:09:42Zcom_10342_41com_10342_1com_10342_122col_10342_42col_10342_124
Dellana, Ryan
2016-08-26T12:49:02Z
2016-08-26T12:49:02Z
2016-07-25
http://hdl.handle.net/10342/5940
This thesis introduces back-projective priming, a computer vision technique that synergistically fuses object recognition and pose estimation by augmenting 3D models with geometric constraints. It also enables the use of image features too indistinct for use by other model fitting algorithms such as geometric hashing. To efficiently accommodate features that do not provide a scale attribute, we've developed a "match pair" finding heuristic called second-order similarity that reduces model fitting time complexity from a worst case of O(N^2) to O(N log N). An object recognition problem that is simple, practical, and well explored by other researchers is the problem of locating electrical outlets from the vantage point of a mobile robot. To demonstrate the relative merits of back-projective priming, we use it to build a system capable of locating generic electrical outlets in unmapped environments. Compared to our baseline algorithm, back-projective priming is shown to provide superior sensitivity when dealing with the challenges of low contrast, perspective distortion, partial occlusion, and decoys.
en
Object Recognition
Pose Estimation
Perception
Robotics
Back-Projective Priming: Toward Efficient 3D Model-Based Object Recognition via Preemptive Top-Down Constraints
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/59562021-03-03T21:08:58Zcom_10342_41com_10342_1com_10342_2com_10342_6421col_10342_43col_10342_8858col_10342_6422
Wang, George
2016-09-12T13:11:59Z
2016-09-12T13:11:59Z
2016-04
http://hdl.handle.net/10342/5956
These items are part of a grant project between Dr. Wang, Department of Construction Management, East Carolina University, and the North Carolina Department of Transportation. The grant project will run from August 1, 2016 to July 31, 2018.
Concrete Aggregate
North Carolina Department of Transportation
Eastern North Carolina
Using Recycled Concrete Aggregate in Non-Structural Concrete on NCDOT Projects in Eastern NC
Other Scholarly Work
oai:TheScholarship.intra.ecu.edu:10342/60042023-07-31T10:42:15Zcom_10342_41com_10342_1com_10342_122col_10342_42col_10342_124
Prateek, Prerna
2017-01-11T19:59:06Z
2017-01-11T19:59:06Z
2016-08-29
http://hdl.handle.net/10342/6004
Online shopping has developed in parallel with the Internet, and Recommendation Systems have played a pivotal role in its growth. Recommendations are usually provided in two ways: Content-based Filtering and Collaborative Filtering. Both forms of recommendation face the Cold-Start problem due to an initial lack of information. To overcome this issue, Image-based Recommendation Systems were introduced, allowing users to locate products based on image similarity when shopping in categories such as clothes, shoes, home decor, kitchen and dining utilities, jewelry, and accessories, where purchases are driven largely by viewing images. In this thesis, a Hybrid Model for displaying images similar to the product being viewed was developed using Deep Features and Description-based Models. The Hybrid Model displays the set of images retrieved by both the Deep Features and Description-based Models. Implementation and comparison of results were performed on 100,000 images from the SBU Captioned Photo Dataset.
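The hybrid display set described above, images returned by both component models, amounts to a set intersection over the two candidate lists. A minimal sketch under that reading (function and data names are illustrative, not from the thesis):

```python
def hybrid_recommend(deep_similar, desc_similar):
    """Hybrid display set: keep only images judged similar by BOTH the
    deep-feature model and the description-based model, preserving the
    deep-feature ranking order."""
    desc = set(desc_similar)
    return [img for img in deep_similar if img in desc]

# Candidates from each model for the product currently being viewed:
combined = hybrid_recommend(["img_a", "img_b", "img_c", "img_d"],
                            ["img_d", "img_a", "img_x"])
```

Requiring agreement between the two models trades recall for precision: an image must look similar *and* be described similarly to be shown.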
en
Image Retrieval
Neural Network
Intelligent Model for Image-Based Recommendation System
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/46902021-03-03T21:10:48Zcom_10342_41com_10342_1com_10342_122col_10342_42col_10342_124
Samavatian, Arman
2015-02-02T19:27:59Z
2017-02-07T22:22:35Z
2014
http://hdl.handle.net/10342/4690
In this thesis we provide a method for finding the fingering of a music piece on any type of guitar using a hand model. Our method is made more realistic by deploying a model of the user's hand and by considering the constraints imposed by the guitar and the music notes. We have modeled the movements of the user's hand such that the thumb does not play any role, and the movements of the other four fingers are modeled using a set of kinematics equations. We use two sets of constraints, derived from the guitar and the music notes, in order to include the playing techniques required by the music piece and the guitar. The guitar is treated as a separate entity in our model with its own properties, resulting in a method independent of the type and tuning of the instrument. Since we use the hand model to generate the fingering of the music piece, the results of the method are gestures generated for the notes, and the final outcome is an animation for the entire sheet of music.
Computer science
Gesture prediction
Human-computer interactions
Music performance
Gesture Prediction Model for the Guitar Fingering Problem
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/40942021-03-03T20:52:36Zcom_10342_122com_10342_41com_10342_1col_10342_124col_10342_45
Hardison, Dylan
2013-01-15T12:41:50Z
2013-01-15T12:41:50Z
2012
http://hdl.handle.net/10342/4094
Evaluating supervisor competency levels has been a management challenge since the beginning of supervisory roles in the construction process. Supervisors perform a critical role in the workplace with respect to workplace safety and health. Supervisors are the driving component of the operational aspects of management systems and often convey messages from upper-level management directly to the line-level workforce. Because a supervisor serves as a liaison for the line-level workforce, it is vitally important that supervisors have a clear understanding of their roles and responsibilities within their organization. As upper-level management strives to improve the safety record of an organization, the supervisor must be valued as a key component of the organization's effort to establish a proactive safety culture. The issue presents itself when the true level of supervisor competency cannot be determined by management. The purpose of this paper is to identify the key knowledge-based competencies suggested to be the most important for the construction supervisor with respect to improving construction site safety performance.
Management
Competency
Occupational
Safety
Supervisors
Knowledge-Based Competencies Necessary for the Frontline Construction Supervisor : Improving Safety through Knowledge
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/67322021-03-03T21:17:47Zcom_10342_41com_10342_1com_10342_122col_10342_42col_10342_124
Pandit, Bigyan
2018-05-25T17:24:04Z
2018-05-25T17:24:04Z
2018-05-02
http://hdl.handle.net/10342/6732
Knowledge discovery from large data for system security management and threat detection has been a complex task due to the large number of users and the dynamic nature of distributed systems. Healthcare organizations, as a sensitive application domain, serve a large community of users with different roles performing different sets of tasks. One-to-one monitoring of all users' interactions to maintain a secure system is a complex process; thus, we need a system capable of handling and monitoring users' actions closely. To solve this issue, we propose a system that considers users' real-time behavioral activities and their predefined workflows based on their roles. We record the system access log to capture users' run-time information and apply data mining techniques to extract common behavior patterns. These common behavior patterns help to analyze the common activities within the system. Adding a knowledgebase of workflows makes the system more robust by predefining the set of actions a user can perform. A search-based engine is then applied to the common-behavior knowledgebase and the workflow knowledgebase to discover the hidden knowledge behind users' interactions with the system. We construct a Petri net of the workflow to support the proposed architecture and validate the major findings using various healthcare scenarios in the ProM tool. This thesis presents a knowledge-driven decision support system that effectively assists the system administrator in gaining deep insight into user behavior, tracking insecure activities, and redefining existing processes. The illustrative case study indicates that the approach is both feasible and effective.
en
Pattern
Action
Event
Knowledgebase
Workflow
Behavior
Petri Net
Generating a Knowledgebase of Common Behavior and Workflow Patterns for Secure Systems
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/106412023-10-12T18:55:36Zcom_10342_41com_10342_1com_10342_122col_10342_42col_10342_124
Ithipathachai, Von
2022-06-09T19:06:58Z
2022-06-09T19:06:58Z
2022-04-26
http://hdl.handle.net/10342/10641
User queries on Stack Overflow commonly suffer from either inadequate length or inadequate clarity with regard to the languages and/or tools they are meant for. Although the site uses a tagging system for classifying questions, tags are used minimally, if at all. To investigate the impact of tags on the quality of results returned by queries, in this research we propose a new query expansion solution. Our technique assigns tags to queries based on how well they match the queries' topics. We evaluated our technique on eight sets of queries categorized by overall length and programming language. We examined the retrieval results after adding varying numbers of tags to the queries, and monitored the recall and precision rates. Our results indicate that queries yield considerably higher recall and precision rates with extra tags than without. We further conclude that tags are a particularly effective means of enhancement when the original queries do not return sufficient results on their own.
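The recall and precision rates monitored in the evaluation are standard per-query retrieval metrics. A minimal sketch of how they are computed for one query's result set (names are illustrative):

```python
def precision_recall(retrieved, relevant):
    """Per-query retrieval quality: precision is the fraction of retrieved
    results that are relevant; recall is the fraction of relevant results
    that were retrieved."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall
```

Comparing these numbers for the original query versus the tag-expanded query, averaged over a query set, is the kind of measurement the study reports.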
en
StackOverflow Analysis
Tag Recommendation
Query Reformulation
Analysis of the Impact of Tags on Stack Overflow Questions
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/46612021-03-03T20:56:23Zcom_10342_122com_10342_41com_10342_1col_10342_124col_10342_45
Cooke, Michelle
2015-02-02T19:25:48Z
2015-02-02T19:25:48Z
2014
http://hdl.handle.net/10342/4661
The prevalence of low back pain in industry has contributed to employee days away from work and therefore decreased production. Low back pain is even more pervasive in industries where the work requires the employee to lean forward while performing a task. Leaning forward is natural for tasks that require visual acuity and manual manipulation. Chairs that are usually provided have backrests with lumbar support; however, leaning forward makes the use of the backrest insufficient or non-existent. This study examines whether a "front-rest" (as opposed to a backrest) provides better support for employees during the performance of their tasks. Two groups of subjects were used to test a chair that had both a backrest and a front-rest feature. Each group had 15 men and 15 women, whose anthropometric measurements were taken. All subjects completed questionnaires on their comfort in the chair, worked on a jigsaw puzzle for an hour, and then completed the same questionnaires again. The heart rate of the subjects was monitored throughout the activity. The differences between the before and after questionnaire results and heart rates were analyzed statistically. The results showed no difference between the front-rest group and the backrest group. There were, however, trends in the data indicating that more research is needed, even though best practices were used. This study represents a baseline that can be used for further study of low back pain in forward-leaning tasks.
Occupational health
Back pain
Chair evaluation
Cumulative trauma disorders
Ergonomics
Occupational safety
Seated work
An Ergonomic Evaluation of a Front Support Chair for Forward Leaning Seated Tasks
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/76422021-03-03T21:26:07Zcom_10342_122com_10342_41com_10342_1col_10342_124col_10342_45
Sommer, Stella Julia
2020-02-04T15:23:09Z
2020-02-04T15:23:09Z
2019-12-10
http://hdl.handle.net/10342/7642
For this thesis, eleven published case studies of laboratory incidents that involved hazardous chemicals and occurred at primary educational and academic institutions were compared. The important information on the incident settings was used to construct bowtie diagrams. This visual method served as a helpful tool to find similarities and differences of the incidents. Common themes between the different cases were lack of supervision, lack of training, deviation from established procedures, and an inadequate or delayed emergency response. Failing barriers provided several pathways for the incidents to occur. Therefore, hierarchical risk management models could not adequately accommodate dynamic teaching environments. The results of this project show that primary educational and academic facilities need to make improvements to their risk management systems and work operations. Laboratory incidents continue to occur at a high frequency. Therefore, effective methods on how to teach chemical health and safety and how to communicate occupational risk need to be developed.
en
Bowtie Diagrams
Incidents in Educational and Academic Chemistry Laboratories: A Comparative Case Study Project
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/106782023-11-02T16:10:26Zcom_10342_41com_10342_1com_10342_122col_10342_44col_10342_124
Harr, David D
2022-06-14T02:29:23Z
2023-05-01T08:01:58Z
2022-05-02
http://hdl.handle.net/10342/10678
Innovation does not come about by random chance but is intentionally cultivated by the efforts of a designer. Many strategies exist for approaching design, ranging from the instinctual and intuitive to more technical or analytical methods. Engineers are continually striving to improve the effectiveness of the design process. One area of the engineering design process deserving of attention is the ideation phase. Ideation refers to the brainstorming and idea-generating activities that usually happen early in the design process. When faced with a problem, engineers work to gather as many potential solutions as possible. Having a large body of initial ideas helps designers converge on an optimal final solution. Engineers have developed numerous analytical ideation tools to guide cognitive design processes and increase ideation productivity. This research investigates the effects of enhancing conceptual design tools in accordance with recommendations from the field of cognitive science. Pedagogy and learning theory literature frequently advocate the use of multimodal representation, which refers to using multiple sensory avenues, such as text, sound, and visuals, to communicate more effectively. A common application of this multimodal principle is to supplement text with visuals. This research investigates the impact of such a recommendation within the context of design ideation. An experiment was organized to evaluate the effect of adding visual icons to an analytical ideation tool. Using a panel of expert graders, the ideation results of engineering students were graded. This data was then statistically analyzed to look for correlations between the merit of the ideation outcomes and the presence or absence of visual icons. Ultimately, no correlation was found between increased merit in ideation outputs and the presence of visuals in the ideation tool.
Upon reflection, it was proposed that there are simply other factors which had a bigger impact on the ideation results in the context of this experiment. Finally, the investigation added insight into the use of different parameters for measuring ideation effectiveness including quality, quantity, novelty, feasibility, and variety. The statistical analysis revealed that in this experiment a positive correlation existed between all five metrics. This implies that in certain applications researchers may be able to justify only using one criterion for evaluating creative ideation output.
en
Multimodal
Ideation
Increasing Creative Output by Visually Enhancing Engineering Design Tools
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/36022021-03-03T20:52:34Zcom_10342_41com_10342_1com_10342_122col_10342_42col_10342_124
Wu, Tong
2011-06-24T15:35:36Z
2011-06-24T15:35:36Z
2011
http://hdl.handle.net/10342/3602
Photon propagation in biological tissues can be modeled numerically with Monte Carlo simulations. However, testing such a program is difficult because the test oracles are unknown. Although approaches based on the Beer-Lambert law, van de Hulst's tables, or the Radiative Transfer Equation (RTE) can be used to test Monte Carlo modeling programs, these approaches apply only to programs designed for homogeneous media. A rigorous way of testing Monte Carlo modeling programs for heterogeneous media is needed.   Metamorphic testing, an effective approach for testing systems that do not have test oracles, is one possible supplementary approach for testing a Monte Carlo modeling program for heterogeneous media. In metamorphic testing, instead of verifying the correctness of a test output, the satisfaction of a metamorphic relation among the test outputs is checked. If a violation of the metamorphic relation is found, the system implementation must have some faults. However, checking only the metamorphic relations is not enough to ensure testing quality: randomly or accidentally generated incorrect outputs may satisfy a metamorphic relation as well. Therefore, it is necessary to provide a systematic approach to measure the effectiveness of a metamorphic test, to choose metamorphic relations, and to generate test input data.   In this thesis, we propose a new approach called self-checked metamorphic testing, in which the original metamorphic testing is extended with the evaluation of test-coverage adequacy criteria to measure the quality of a metamorphic test, to guide the creation of metamorphic relations, to generate test inputs, and to investigate the exceptions found. The effectiveness of this approach has been demonstrated by testing a parallel Monte Carlo modeling program we developed for simulating photon propagation in human skin.   This thesis contains three parts.
In the first part, the enhanced Monte Carlo modeling program was used for a preliminary study of the relationship between the height of the collection lens and the contrast values of the reflectance image of the system. In the second part, the homogeneous part of the Monte Carlo program was validated with the van de Hulst's table method, which compares the simulation results with the calculated values in van de Hulst's tables. The third and main part of the thesis applies the self-checked metamorphic testing approach to test the Monte Carlo modeling program.
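The metamorphic-testing idea in this abstract can be illustrated with a toy example (not the thesis's program): for a purely absorbing slab, one plausible metamorphic relation is that increasing the absorption coefficient must not increase the simulated transmittance. Checking the relation between two runs, rather than any single exact output, sidesteps the missing oracle. All parameters below are invented for illustration:

```python
import random

def transmittance(mu_a, thickness, n_photons, seed=42):
    """Toy Monte Carlo: fraction of photons whose sampled free path
    (exponential with rate mu_a) exceeds a purely absorbing slab."""
    rng = random.Random(seed)
    passed = sum(1 for _ in range(n_photons)
                 if rng.expovariate(mu_a) > thickness)
    return passed / n_photons

# Metamorphic relation: raising absorption must not raise transmittance.
t_low = transmittance(mu_a=1.0, thickness=1.0, n_photons=20000)
t_high = transmittance(mu_a=2.0, thickness=1.0, n_photons=20000)
assert t_high <= t_low, "metamorphic relation violated"
```

As the abstract warns, an incorrect program can still satisfy such a relation by accident, which is exactly why the thesis adds coverage-adequacy checks on top of the relations.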
Computer science
Metamorphic testing
Monte Carlo modeling
Test adequacy criteria
Self-Checked Metamorphic Testing of Monte Carlo Simulation
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/60012021-03-03T21:09:08Zcom_10342_41com_10342_1com_10342_122col_10342_42col_10342_124
Shao, Hongbing
2017-01-11T19:48:41Z
2017-01-11T19:48:41Z
2016-08-18
http://hdl.handle.net/10342/6001
Software testing of scientific software systems often suffers from the test oracle problem, i.e., the lack of test oracles. The Amsterdam discrete dipole approximation code (ADDA) is a scientific software system that can be used to simulate light scattering by scatterers of various types, and its testing suffers from the test oracle problem. In this thesis work, I established a framework for testing scientific software systems and evaluated it using ADDA as a case study. To test ADDA, I first used the CMMIE code as a pseudo-oracle for simulating light scattering by a homogeneous sphere scatterer. Comparable results were obtained between ADDA and the CMMIE code, which validated the use of ADDA with homogeneous sphere scatterers. Then I used an experimental result obtained for light scattering by a homogeneous sphere to further validate the use of ADDA with sphere scatterers; ADDA produced a light-scattering simulation comparable to the experimentally measured result. Next, I used metamorphic testing to generate test cases covering scatterers of various geometries, orientations, and degrees of homogeneity or non-homogeneity. ADDA was tested under each of these test cases and all tests passed. The use of statistical analysis together with metamorphic testing is discussed as a future direction. In short, using ADDA as a case study, I established a testing framework, including the use of pseudo-oracles, experimental results, and metamorphic testing techniques, to test scientific software systems that suffer from test oracle problems. Each of these techniques is necessary and contributes to the testing of the software under test.
en
light Scattering
Testing Framework
ADDA
A Framework for Testing Scientific Software: A Case Study of Testing Amsterdam Discrete Dipole Approximation Software
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/17532021-03-03T20:52:44Zcom_10342_41com_10342_1com_10342_122col_10342_42col_10342_124
Bleicher, David
2013-06-06T12:18:24Z
2014-07-31T12:06:27Z
2013
http://hdl.handle.net/10342/1753
Research performed for this thesis indicates an impedance mismatch between prevailing approaches to development of service-oriented enterprise applications and the consumption capabilities of mobile devices. The rich semantics and strong validation mechanisms inherent in SOAP-based web services, common to large-scale enterprise development, introduce inefficiencies of network bandwidth consumption and serialization/de-serialization processing requirements. These inefficiencies may be financially burdensome when systems are migrated to a cloud-based hosting environment and both costly and non-performant when accessed from network and processor constrained mobile devices. Yet wholesale abandonment of established enterprise practice and legacy systems for the adoption of unfamiliar architectural styles is rarely practical.  This thesis proposes a series of incremental changes to enterprise web services architecture that, individually, provide measurable efficiency benefits both when served from the cloud and when consumed from mobile devices. The objective of this research is to quantify the benefits and illustrate trade-offs for each. Within a cloud deployment, selective application of HTTP compression is shown to yield performance improvements in excess of 40% with data transfer  reductions of up to 85%. Analysis identifies the characteristics of services that suffer degraded performance under compression, and illustrates how similar performance and data reduction benefits may be achieved through service augmentation with alternative message and request formats.  Thesis focus then turns to options for improving efficiency in the consumption of these services from native applications on prevailing mobile device platforms. Development and measurements performed for this thesis identify approaches for faster and more efficient processing of existing services on mobile devices and relates these to the developer effort required. 
Further enhancements to application performance and development simplicity are demonstrated through mobile consumption of the augmented services and formats proposed for optimized cloud deployment. Research for this thesis suggests that on both the cloud and mobile sides of a distributed system, performance and financial benefits may be achieved while building upon, rather than replacing, existing services code and architectural patterns.
Computer science
Computer engineering
Optimization of Web Services for Cloud Deployment and Mobile Consumption
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/98182022-02-19T08:15:02Zcom_10342_41com_10342_1col_10342_42
Tabrizi, Nasseh
2022-02-18T15:15:13Z
2022-02-18T15:15:13Z
2019-01-01
2169-3536
http://hdl.handle.net/10342/9818
10.1109/access.2019.2937518
en_US
deep neural networks
matrix factorization
Recommender system
Customer Reviews Analysis with Deep Neural Networks for E-Commerce Recommender Systems
Article
oai:TheScholarship.intra.ecu.edu:10342/122172023-02-09T08:16:17Zcom_10342_41com_10342_1col_10342_44
Bell, Natasha L.
Hitchcock, Daniel R.
2023-02-08T19:36:38Z
2023-02-08T19:36:38Z
2022-01-03
0047-2425
http://hdl.handle.net/10342/12217
10.1002/jeq2.20309
en_US
spatiotemporal relationships
water quality parameters
wastewater treatment wetland system
Spatiotemporal Water Quality Variability in a Highly Loaded Surface Flow Wastewater Treatment Wetland
Article
oai:TheScholarship.intra.ecu.edu:10342/85792021-03-03T22:08:33Zcom_10342_122com_10342_41com_10342_1col_10342_124col_10342_45
Judge, Korin
2020-06-24T01:14:44Z
2020-06-24T01:14:44Z
2020-06-22
http://hdl.handle.net/10342/8579
Goal setting is a popular and often beneficial tool used to motivate workers worldwide. Recent research has revealed that negative side-effects including unethical behavior are associated with goal setting. In occupational safety and health (OSH), injury reduction goals are regularly used within safety incentive programs (SIP) or as standalone practice. Unethical behavior in the form of failing to report injury or illness is possible and its consequences severe: inaccurate data leads to incorrect allocation of resources for worker protection and in turn, more injury and illness. To investigate any link between OSH goal setting and injury reporting, anonymous surveys and interviews collecting worker experiences were compiled within various industries. An analysis of 31 responses using Fisher's Exact Test revealed statistically significant associations: participants whose organizations used injury-reduction goals reported that coworkers failed to report injuries more often than workers whose organizations did not use such goals. Instances of non-reporting due to incentives, coworker or supervisor disapproval as well as informal disciplinary action were associated more strongly with organizations that used goal setting than those that did not. More research into why these specific factors discourage injury reporting in the presence of goal setting is needed in order to potentially mitigate their effects.
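The Fisher's exact test used in the analysis can be computed directly from the hypergeometric distribution. A sketch of the one-sided (right-tail) version for a 2x2 contingency table, e.g. goal-setting use versus reported non-reporting (statistics packages typically report the two-sided variant, which also sums equally extreme tables in the other direction; the function name here is illustrative):

```python
from math import comb

def fisher_right_tail(a, b, c, d):
    """One-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    hypergeometric probability of a count of at least `a` in the top-left
    cell, with all row and column margins held fixed."""
    n = a + b + c + d
    total = comb(n, a + c)
    return sum(comb(a + b, x) * comb(c + d, a + c - x)
               for x in range(a, min(a + b, a + c) + 1)) / total
```

The exact test is appropriate here precisely because, with only 31 responses, the cell counts are too small for a chi-squared approximation.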
Goal Setting and Unethical Behavior: Implications for Occupational Safety and Health and the Safety Incentive Program
Master's Thesis
oai:TheScholarship.intra.ecu.edu:10342/4799 2021-03-03T20:56:31Z com_10342_41 com_10342_1 col_10342_44 col_10342_72
Bryan, Alex
2015-05-05T15:38:22Z
2020-05-07T08:01:51Z
2015-04
Abdelsalam, Rana. (2015). Design of a force biofeedback tooth extraction educational device. Unpublished manuscript, Honors College, East Carolina University, Greenville, N.C.
http://hdl.handle.net/10342/4799
The current jaw model used by students at the East Carolina School of Dental Medicine to practice tooth extractions does not accurately simulate the forces in a real human mouth. Initially, multiple alternative designs were generated and divided into five categories: tooth material, full or partial jaw, tooth attachment method, sensor type, and alert system. These alternatives were analyzed, and the chosen options were metal teeth for the tooth material, a full jaw for the jaw type, fixed with pivot for the attachment method, strain gages on the forceps for the sensor type, and LEDs for the alert system. After further analysis, the following changes were made: instead of one full jaw, individual teeth were mounted on rectangular prism and cylindrical bases. Strain gages were placed on the sides of the rectangular bases to determine bending moments and on the cylindrical bases to determine twisting moments applied to the teeth. The LED alert system was used to alert the user when certain moment thresholds were met. A prototype was built and tested, and the design met the functional requirements of the engineering design specifications. Recommendations were provided to make the design more commercially feasible, including assembling some of the electrical components in house, sending the models out to be manufactured by a third party, placing the LED lights closer to the model, and mounting the models in a way that better simulates a real extraction.
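The strain-gage arrangement described above infers the bending moment applied to a tooth from a surface strain reading via beam theory, M = εEI/c. A minimal sketch of that conversion; the material and base dimensions here are illustrative assumptions, not the thesis's specifications:

```python
# Strain-to-moment conversion for a gage on a rectangular base.
E = 200e9          # Young's modulus, Pa (assumes a steel base)
b, h = 0.01, 0.01  # assumed base cross-section width and height, m
I = b * h**3 / 12  # second moment of area of the rectangular section
c = h / 2          # distance from neutral axis to the gaged surface

def bending_moment(strain):
    """Bending moment implied by a surface strain reading: M = strain*E*I/c."""
    return strain * E * I / c

M = bending_moment(100e-6)  # moment at 100 microstrain, in N*m
print(round(M, 2))
```

An alert threshold would then amount to lighting the LED whenever `bending_moment(reading)` exceeds a chosen limit.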
Extractions
Forceps
Typodont head/mouths
DESIGN OF A FORCE BIOFEEDBACK TOOTH EXTRACTION EDUCATIONAL DEVICE
Honors Thesis
oai:TheScholarship.intra.ecu.edu:10342/4014 2021-03-03T20:58:29Z com_10342_122 com_10342_41 com_10342_1 col_10342_124 col_10342_45
Hosseini Tabar, Hossein
2015-10-08T16:12:54Z
2015-10-08T16:12:54Z
2012
http://hdl.handle.net/10342/4014
Fatigue is considered one of the main causes of motor carrier crashes. To control this hazard, the Federal Motor Carrier Safety Administration (FMCSA) enforces prescriptive Hours-of-Service (HOS) regulations. Over the last decade, an emerging consensus has questioned the efficiency of this prescriptive regulation. Consequently, a comprehensive approach called the Fatigue Risk Management System (FRMS) is becoming popular in fatigue science. FRMS has shifted the focus of responsibility for safety away from regulatory bodies toward companies and individuals.
Motor carriers, for their part, should be able to identify which of their organizational factors contribute to their fatigue performance, so that they can enhance that performance by improving those factors.
This research project investigated the organizational factors and associated safety practices that contribute to fatigue performance. A total of 134 motor carriers with acceptable and unacceptable fatigue performance were studied. The Compliance, Safety, Accountability (CSA 2010) measurement system was used to determine the motor carriers' fatigue performance. The organizational factors studied were management commitment, schedule design, HOS management, and training systems. Constituent elements of each of these organizational factors were identified through a literature review.
Based on the results of the study, it is suggested that safety budget (a management commitment element), the percentage of drivers with a regular schedule (a schedule design element), and utilization of electronic logbooks (an HOS management element) contribute to fatigue performance among motor carriers. Motor carriers seeking to improve their fatigue performance may therefore consider implementing these best safety practices.
Occupational health
Management
Organizational behavior
Transportation planning
Contributing organizational factors to driver fatigue based on the Compliance, Safety, Accountability (CSA 2010) Measurement System
Master's Thesis