The battery enclosures of current electric vehicles are made of metallic alloys, specifically aluminum or steel. Replacing these metallic alloys with a lightweight material, such as carbon fiber composite, may offer significant weight savings, since the composite provides comparable strength at a fraction of the weight. Carbon fiber is corrosion-resistant and can be engineered for fire resistance and electrical insulation. It can also be fine-tuned for specific applications and performance needs, such as crashworthiness.
Designing a carbon fiber-based battery enclosure for crash performance through trial-and-error experiments can be extremely laborious and inefficient. This inefficiency can be alleviated by using virtual manufacturing and structural analysis software. A simulation software chain allows for the virtual manufacturing and crash-testing of the battery enclosure in a single process. However, these numerical simulations are computationally expensive, time-consuming, and may require significant user interaction. Finding optimal design parameters within a reasonable time frame can therefore be extremely challenging.
The first part of this dissertation addresses the forward problem of accelerating the design of battery enclosures for crash performance. It involves developing a machine learning-based surrogate model of the simulation workflow that can provide quick, approximate results in a fraction of a second. This can further support design space exploration studies.
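The surrogate idea can be illustrated with a minimal sketch. The one-parameter design variable (a wall thickness) and the scalar crash metric below are hypothetical stand-ins, not quantities from the dissertation: the expensive model is sampled at a few design points and replaced by a cheap interpolant for dense design exploration.

```python
# Minimal sketch of the surrogate-model idea: replace an expensive simulation
# with a cheap approximation fitted to a few precomputed runs. The design
# parameter and "crash metric" are hypothetical stand-ins for the full
# virtual manufacturing and crash-simulation chain.

def expensive_simulation(thickness):
    # Placeholder for one costly simulation run.
    return 1.0 / thickness + 0.5 * thickness ** 2

# Run the costly model at a handful of design points only.
samples = [0.5 + 0.25 * i for i in range(7)]          # thicknesses 0.5 .. 2.0
responses = [expensive_simulation(s) for s in samples]

def surrogate(thickness):
    """Piecewise-linear interpolation: a quick, approximate stand-in."""
    for (s0, r0), (s1, r1) in zip(zip(samples, responses),
                                  zip(samples[1:], responses[1:])):
        if s0 <= thickness <= s1:
            w = (thickness - s0) / (s1 - s0)
            return (1 - w) * r0 + w * r1
    raise ValueError("outside sampled design range")

# The cheap surrogate can now be queried densely for design exploration.
grid = [0.5 + 0.01 * i for i in range(150)]
best = min(grid, key=surrogate)
print(best)  # near the true optimum thickness of 1.0
```

The interpolant is one of the simplest possible surrogates; the dissertation's machine learning model plays the same role for a far higher-dimensional design space.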
Physical phenomena in engineering design are governed by differential equations, typically solved in a forward manner with known physical parameters, initial and/or boundary conditions, and a source term. However, there is often a need to reconstruct the source term from available measurement data, which may be corrupted with noise, along with the initial and/or boundary conditions, and physical parameters. These types of problems are known as inverse problems, more specifically, inverse source problems. Inverse source problems are often ill-posed and are usually solved by iterative schemes and optimization techniques with regularization, which can be time-consuming. In recent years, machine learning approaches have shown promise in managing ill-posed problems and handling noisy data.
The second part of this dissertation addresses a specific type of inverse source problem, known as the dynamic load identification problem, which involves determining the time-varying forces acting on a mechanical system from sensor measurements. The study begins with the development of a deep learning model that leverages physics information to infer the forcing functions of both linear and nonlinear oscillators from observational data. The study then culminates in the development of a physically consistent surrogate model capable of providing robust predictions from noisy observations without the need to explicitly solve the differential equation.
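A toy version of the load-identification problem, with illustrative parameter values not taken from the dissertation, shows how a forcing function can be recovered from displacement data by evaluating the residual of the governing equation (here with plain finite differences rather than a learned model):

```python
import math

# Toy dynamic load identification for a linear oscillator
# m*x'' + c*x' + k*x = f(t): given sampled displacements, the residual of the
# governing equation (via central finite differences) recovers the applied
# force. All parameter values here are illustrative assumptions.
m, c, k = 1.0, 0.0, 4.0
n, dt = 2001, 0.005
t = [i * dt for i in range(n)]
x = [math.sin(ti) for ti in t]  # displacement consistent with f(t) = 3 sin(t)

f_est = []
for i in range(1, n - 1):
    v = (x[i + 1] - x[i - 1]) / (2 * dt)            # central-difference velocity
    a = (x[i + 1] - 2 * x[i] + x[i - 1]) / dt ** 2  # central-difference acceleration
    f_est.append(m * a + c * v + k * x[i])

# With x(t) = sin(t), the exact force is (k - m) * sin(t) = 3 sin(t).
err = max(abs(fe - 3 * math.sin(t[i + 1])) for i, fe in enumerate(f_est))
print(err)  # small discretization error only
```

Differentiating noisy measurements like this amplifies noise, which is exactly the ill-posedness that motivates the regularized and learning-based approaches discussed above.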
Patients requiring admission to the Trauma Intensive Care Unit (TICU) represent some of the most critically ill and complex cases within intensive care. These patients, often suffering from significant trauma to vital areas, may necessitate prolonged enteral feeding, frequently leading to the insertion of gastrostomy tubes. Despite the critical nature of gastrostomy tube management for patients with severe trauma and the need for enteral feeding, there is a gap in knowledge and confidence in this area. This gap necessitates targeted educational programs to improve patient outcomes. This quality improvement project focused on the nursing staff in the TICU at a large academic medical center. The nurses received a comprehensive education module developed according to Lippincott standards, which covered the different types of gastrostomy tubes, nursing interventions, and documentation practices. The module included a didactic component and hands-on practice with gastric tube models. A pre- and post-test knowledge check was conducted to evaluate the learning outcomes. All 43 TICU staff registered nurses at the facility participated. After the educational module's implementation, significant improvements were observed in nursing staff knowledge regarding gastrostomy tubes. The median score on the pre-test was 70%, increasing to 100% on the post-test. A Wilcoxon signed-rank test showed a statistically significant difference between pre- and post-test scores, z = 5.207, p < .001. The results demonstrate the effectiveness of the education module in improving TICU nurses' knowledge of gastric tube care.
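The reported pre/post comparison can be illustrated with a small sketch of the Wilcoxon signed-rank statistic. The scores below are hypothetical placeholders, not the study's data:

```python
# Illustrative sketch of the Wilcoxon signed-rank statistic for paired
# pre/post knowledge-check scores. The score lists are hypothetical
# placeholders, not the study's actual data.

def wilcoxon_signed_rank(pre, post):
    """Return W, the smaller of the positive- and negative-rank sums."""
    # Signed differences; zero differences are discarded by convention.
    diffs = [b - a for a, b in zip(pre, post) if b != a]
    # Rank the absolute differences, averaging ranks for ties.
    ordered = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(ordered):
        j = i
        while (j + 1 < len(ordered)
               and abs(diffs[ordered[j + 1]]) == abs(diffs[ordered[i]])):
            j += 1
        avg = (i + j) / 2 + 1  # average of the tied 1-based rank positions
        for k in range(i, j + 1):
            ranks[ordered[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)

pre = [70, 60, 80, 70, 50, 90, 70, 60]       # hypothetical pre-test scores (%)
post = [100, 90, 100, 90, 80, 100, 100, 90]  # hypothetical post-test scores (%)
print(wilcoxon_signed_rank(pre, post))  # 0: every hypothetical score improved
```

A statistic of zero (no negative ranks) is the most extreme value possible, mirroring the study's finding that post-test scores shifted uniformly upward.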
Sensor placement and Informative Path Planning (IPP) are fundamental problems that frequently arise in various domains. The sensor placement problem necessitates finding optimal sensing locations in an environment, enabling accurate estimation of the overall environmental state without explicitly monitoring the entire space. Sensor placement is particularly relevant for problems such as estimating ozone concentrations and conducting sparse-view computed tomography scanning. IPP is a closely related problem that seeks to identify the most informative locations along with a path that visits them while considering path constraints such as distance bounds and environmental boundaries. This proves useful in monitoring phenomena like ocean salinity and soil moisture in agricultural lands—situations where deploying static sensors is infeasible or the underlying dynamics of the environment are prone to change and require adaptively updating the sensing locations.
This thesis provides new insights leveraging Bayesian learning along with continuous and discrete optimization, which allow us to reduce the computation time and tackle novel variants of the considered problems. The thesis initially addresses sensor placement in both discrete and continuous environments using sparse Gaussian processes (SGPs). Subsequently, the SGP-based sensor placement approach is generalized to address the IPP problem. The method demonstrates efficient scalability to large multi-robot IPP problems, accommodates non-point field-of-view (FoV) sensors, and models differentiable path constraints such as distance budgets and boundary limits. The IPP approach is then further generalized to handle online and decentralized heterogeneous multi-robot IPP. Next, the thesis delves into IPP within graph domains to address the methane gas leak rate estimation and source localization problem. An efficient Bayesian approach for leak rate estimation is introduced, enabling a fast discrete optimization-based IPP approach. Lastly, the thesis explores sensor placement in graph domains for wastewater-based epidemiology. A novel graph Bayesian approach is introduced, facilitating the placement of sensors in wastewater networks to maximize pathogen source localization accuracy and enable efficient localization of pathogen sources.
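As a simple illustration of variance-based sensor placement, the sketch below uses classic greedy Gaussian-process posterior-variance minimization on a 1-D strip. This is a textbook baseline, not the thesis's sparse-GP method, and the kernel length-scale and candidate grid are illustrative choices:

```python
import math

# Greedy sensor placement on a 1-D strip: repeatedly add the candidate site
# that most reduces the summed Gaussian-process posterior variance over the
# environment. A classic baseline, not the thesis's SGP approach.

def rbf(a, b, ls=0.2):
    # Squared-exponential kernel on scalar locations.
    return math.exp(-((a - b) ** 2) / (2 * ls ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c]
                              for c in range(r + 1, n))) / M[r][r]
    return x

def total_posterior_variance(sensors, points, noise=1e-4):
    """Sum of GP posterior variances at all points given sensor sites."""
    K = [[rbf(si, sj) + (noise if i == j else 0.0)
          for j, sj in enumerate(sensors)] for i, si in enumerate(sensors)]
    total = 0.0
    for p in points:
        kx = [rbf(p, s) for s in sensors]
        w = solve(K, kx)  # K^{-1} k_x
        total += 1.0 - sum(wi * ki for wi, ki in zip(w, kx))
    return total

candidates = [i / 20 for i in range(21)]  # grid over the unit strip
chosen = []
for _ in range(3):                        # greedily place three sensors
    best = min((c for c in candidates if c not in chosen),
               key=lambda c: total_posterior_variance(chosen + [c], candidates))
    chosen.append(best)
print(sorted(chosen))  # sensors spread out to cover the strip
```

The greedy loop scales poorly with environment size, which is one motivation for the continuous-optimization SGP formulation the thesis develops.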
This research contributes to understanding the effects of local government urban regulatory policy and actions of private actors on a neighborhood’s housing market using the fast-growing city of Charlotte, North Carolina, as a case study.
The first article of this research examines private actors in the rental housing market and their impact on neighborhood outcomes. The analysis focuses on how exclusionary criteria used in online rental advertisements vary spatially and how they potentially impact neighborhood outcomes. It also examines how factors such as race, income, and platform (Zillow vs. Craigslist) influence the presence of exclusionary criteria in rental advertisements.
The second article situates private actors' actions within the scope of a neighborhood’s changing characteristics and their effects on a neighborhood’s capital investment exhibited through housing renovation activity. The analysis employs 10-year longitudinal parcel-level permitting data on housing renovation activity, housing and neighborhood-specific variables, and spatial statistical techniques to assess if a change in a neighborhood’s prevailing characteristics influences housing renovation activity.
The third article analyzes the effects of local government regulatory policies on a neighborhood's housing market, specifically housing code violations that are resolved with repairs. The chapter hypothesizes that housing code violations, when resolved with repairs, will significantly affect a neighborhood's housing market, either by increasing home sale and rental prices or by contributing to the loss of affordable housing as landlords withdraw their properties from the housing market. To test this hypothesis, the research uses longitudinal data on home sales prices, gross rent, housing code violations, and other housing and neighborhood-specific variables. It employs spatial statistics techniques to model their longitudinal relationships.
These three articles collectively contribute to our understanding of neighborhood housing markets analyzed through the lens of private investments and practices and urban regulatory policy adopted by local governments in fast-growing cities like Charlotte. Furthermore, these chapters create a framework that shows how spatial statistics tools, natural language processing techniques, and novel and traditional data can be used to understand the relationship between a neighborhood’s housing market and neighborhood change.
The recently introduced class of two-dimensional materials, monolayer Transition Metal Dichalcogenides (TMDs), is emerging as a highly promising candidate to enhance data transfer capacity in the field of Valleytronics. The strong "atomic spin-orbit interaction" in monolayer TMDs locks the spin of electrons to degenerate valleys with different momenta. These locked valley-spin pairs respond differently to different circular polarizations of light. However, this feature vanishes at room temperature. To address this issue, the coupling between exciton emissions and photonic modes is under extensive investigation.
This dissertation explores the control over TMD valley-polarized emission by coupling the exciton emission to the plasmonic mode. Specifically, we take advantage of the strong coupling between monolayer WS2 and metallic nanogrooves to enhance information routing, thereby achieving higher data capacity.
The first part of this study focuses on analyzing the interdependence between the nanogroove parameters and the coupling condition. In the second part, we demonstrate the k-space separation of valley excitons in monolayer TMDs through the "optical spin-orbit interaction." This separation implies that the helicity of photons determines a preferred emission direction.
This research can serve as a guideline for designing structures and pave the way to transport and read out the spin and valley degrees of freedom in two-dimensional materials. By addressing current challenges in the field of Valleytronics, it offers guidance for future advancements in this area.
Transit Signal Priority (TSP) is a traffic signal control strategy that can provide priority to transit vehicles and thus improve transit service. However, this control strategy generally causes adverse effects on other traffic, which limits its widespread adoption. The development of Connected Vehicle (CV) technology enables the real-time acquisition of fine-grained traffic information, providing more comprehensive data for the optimization of traffic signals. Simultaneously, optimization algorithms in the field of TSP have been advancing at a rapid pace. Artificial intelligence (AI)-powered techniques, such as Deep Reinforcement Learning (DRL), have recently become promising approaches for addressing TSP problems. In this study, we developed adaptive TSP control frameworks for both isolated-intersection and multiple-intersection scenarios, assuming the implementation of CV technology. Leveraging the comprehensive traffic data obtained from CVs, our frameworks employ both single-agent and multi-agent DRL techniques to address the optimization problems. The controllers based on our proposed frameworks were tested in simulation environments and compared with various widely used traffic signal controllers across different scenarios, demonstrating superior performance.
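The reinforcement-learning formulation can be illustrated with a toy sketch. A tabular Q-learning agent, a deliberate simplification of the DRL frameworks described above, chooses which of two approaches receives the green light; the queue dynamics and all constants are illustrative assumptions, not the study's simulation environment:

```python
import random

# Toy sketch of learning-based signal control: a tabular Q-learning agent
# picks which of two approaches gets the green light, with reward equal to
# the negative total queue length. The dynamics are illustrative assumptions.
random.seed(0)

ARRIVAL_P = 0.4  # chance a vehicle joins each queue every step
DEPART = 2       # vehicles discharged per step on the green approach
MAX_Q = 5        # queues are capped to keep the state table small

def step(queues, green):
    """Advance the toy intersection one step; return (next_state, reward)."""
    q = [min(MAX_Q, qi + (1 if random.random() < ARRIVAL_P else 0))
         for qi in queues]
    q[green] = max(0, q[green] - DEPART)
    return tuple(q), -(q[0] + q[1])

Q = {}  # state-action values keyed by ((q0, q1), green)
alpha, gamma, eps = 0.1, 0.9, 0.1
state = (0, 0)
for _ in range(20000):
    # Epsilon-greedy action selection over the two signal phases.
    if random.random() < eps:
        green = random.randrange(2)
    else:
        green = max((0, 1), key=lambda a: Q.get((state, a), 0.0))
    nxt, reward = step(state, green)
    # One-step Q-learning update toward the bootstrapped target.
    target = reward + gamma * max(Q.get((nxt, a), 0.0) for a in (0, 1))
    old = Q.get((state, green), 0.0)
    Q[(state, green)] = old + alpha * (target - old)
    state = nxt
```

A deep network replaces the lookup table Q when the CV-derived state (positions, speeds, arrival times) is too rich to enumerate, which is the setting the study addresses.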
Federal legislation for students with disabilities mandates that all students receive appropriate and relevant instruction across environments to improve postsecondary outcomes across domains. Teachers and parents alike have found that one way to meet individual student needs and increase instructional opportunities for students with disabilities is through purposeful and meaningful community-based instruction (CBI). For students with extensive support needs (ESN), however, the practical implementation of CBI within classroom and community settings may pose several barriers and relies heavily on teacher and family knowledge of community engagement strategies. Previous research in the area of CBI indicates that, through the use of evidence-based practices, CBI is effective in teaching skills across the four identified domains: leisure, vocational, community engagement, and daily living. To bridge gaps in the available literature and research in the area of CBI, this study evaluated the effects of an intervention package comprising three evidence-based practices (video modeling, visual supports, and system of least prompts), goal setting, and collaboration, delivered through peer-implemented instruction, to teach leisure skills to young adults with ESN in relevant community settings. The experimental design was a multiple probe across skills, replicated across two participants. Two young adults with ESN, ages 21 and 22, participated in the study, along with two of their same-age peers and relevant team members/key stakeholders (i.e., the program director at their university and parents). Three community-based leisure skills across three environments were chosen, with a specific skill targeted at each location. The intervention was effective for teaching these leisure skills to the participants across all three community locations. In addition, the participants were able to generalize and maintain these skills at the conclusion of the study.
Social validity measures indicated that all participants felt that these were relevant skills for the participants and that their role in this process was valuable. The findings from this study can be used to guide future research in the area of CBI with students of all ages to support them as they access community settings.
Random antireflective surface nanostructures (rARSS) enhance transmission by reducing the electromagnetic impedance mismatch across the boundary between two optical media, serving as an alternative to traditional coating techniques. Understanding and quantifying the role of randomness in the surface nanostructures remains elusive: there is no comprehensive model that can accurately predict the wideband spectral response of randomly nanostructured surfaces from causal physical principles. Effective-medium approximations (EMA) emulate the randomly structured surface as a sequence of homogeneous film layers, but fail to predict the critical (or cut-off) wavelength above which the enhancement effect is observed and below which bidirectional optical scatter is prominent. Analyzing near-field or far-field radiance due to wavefront propagation through randomly nanostructured surfaces requires a high computational budget, which is challenging for randomly distributed features with varying-scale boundary conditions.
Deterministic periodicity is considered a sufficient surface geometrical descriptor for regular (or long-range repetitive) nanostructured surfaces, whereas the characterization of random surface features is based on first-order statistical evaluations or macroscopic averages, such as autocorrelation lengths, which introduce significant ambiguity at subwavelength scales. What constitutes the "randomness" of rARSS, beyond standard surface topography measures, remains subjective. Conventional optical surface structure characterization disregards aspects of nanoscale morphology, mainly spatial configuration or organization, due to resolution limitations of metrological instruments. The organization of nanostructured features can significantly impact the macroscopic Fresnel reflectivity radiance, bidirectional scattering, and axial transmission enhancement (a cooperative-interference effect).
In this work, transverse granule population distributions and their corresponding granular organization at the nanoscale are determined using a variation of the granulometric image-processing technique. Various rARSS surfaces were fabricated, resulting in unique surface modifications and spectral performance, as observed with scanning electron microscope (SEM) micrographs and spectral photometry, respectively. The approach presented here to quantify the randomness, or complexity, of the nanostructures is based on Shannon's entropy principles. Resolution limitations of conventional characterization techniques, namely non-invasive confocal microscopy and spectroscopic ellipsometry, are discussed. Statistical quantification of nanostructural randomness using Shannon's entropy is proposed as a solution to characterize the unique degree of disorder on the surfaces. A figure-of-merit is derived and computed from surface organization state variables, and it is proposed as a heuristic parameter to predict the transition from spectral scattering to the transmission-enhancement region. This multivariate problem is addressed by accounting for the conditional probability dependence of granule populations as functions of granule dimensions and their corresponding proximity distributions, thereby laying the foundations for a surface microcanonical ensemble model that links surface morphological descriptors to spectral variables.
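The entropy-based quantification can be illustrated with a minimal sketch. The granule-size histograms below are hypothetical, not measured rARSS data: a uniform histogram maximizes Shannon entropy (maximal disorder), while a sharply peaked one yields low entropy:

```python
import math

# Illustrative sketch: Shannon entropy as a disorder measure for a granule
# population. The histograms are hypothetical, not measured rARSS data.

def shannon_entropy(counts):
    """Entropy (in bits) of a discrete granule-size histogram."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

uniform = [25, 25, 25, 25]  # maximally disordered: every size bin equal
peaked = [97, 1, 1, 1]      # nearly monodisperse granule population
print(shannon_entropy(uniform))  # 2.0 bits, the maximum for 4 bins
print(shannon_entropy(peaked))   # well below 2.0
```

The dissertation's figure-of-merit goes further by conditioning on granule proximity as well as size; this sketch covers only the marginal size distribution.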
This study explored teacher and student perspectives on mandated school uniforms. Debate exists over the appropriateness of uniforms, with some stakeholders suggesting positive outcomes while others bemoan limits on student expression. This study sought to fill a gap in research specific to middle school uniform use by exploring teachers' and students' perceptions. This research also considered the intersection of gender and diversity issues with uniform policies because these topics are becoming more prominent in the discussion. Four focus groups were conducted, two at a suburban school and two at an inner-city school. Findings suggested that teachers and students at the suburban middle school experienced uniforms more positively than their counterparts in the inner city. Additionally, findings indicated that female students had more negative experiences with uniform policies and their enforcement. From a social identity perspective, this study suggests that the group experience of the same uniform could have a positive or negative impact. When people feel the need for a positive group self, they demonstrate ingroup bias, which could help or hamper the implementation of school uniforms. This research helps bridge the gap in empirical literature within the context of social groups and critical theory to offer recommendations for administrators and policymakers regarding school uniforms in public middle schools. Results can direct further research while raising awareness of issues administrators should address when considering the implementation of a school uniform policy.
For epidemiological studies seeking to relate a failure time to exposure variables that are expensive to obtain, large cohort studies under simple random sampling can be prohibitively costly to conduct on a limited budget. In such cases, two-phase studies are desirable. Failure-time-dependent sampling (FDS) is a commonly used cost-effective sampling strategy in such studies. To enhance study efficiency beyond FDS, it is necessary to incorporate auxiliary information about the expensive variables into both the sampling design and the statistical analysis.
In survival analysis, it is commonly assumed that all subjects in a study will eventually experience the event of interest. However, this assumption may not hold in various scenarios. For example, when studying the time until a patient progresses or relapses from a disease, those who are cured will never experience the event. These subjects are often labeled ``long-term survivors'' or ``cured'', and their survival time is treated as infinite. When survival data include a fraction of long-term survivors, the censored observations encompass both uncured individuals, for whom the event was not observed, and cured individuals, who will never experience the event. Consequently, the cure status is unknown, and the survival data comprise a mixture of cured and uncured individuals that cannot be distinguished beforehand. Cure models are survival models designed to address this characteristic.
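The mixture formulation behind cure models can be sketched directly. With cure fraction pi, the population survival function is S(t) = pi + (1 - pi) * S_u(t), where S_u is the survival function of the uncured; the exponential choice of S_u and the value of pi below are illustrative:

```python
import math

# Sketch of the mixture cure model's population survival function,
# S(t) = pi + (1 - pi) * S_u(t), where pi is the cured fraction and S_u is
# the survival function of the uncured. The exponential S_u and the value
# of pi are illustrative choices, not from the dissertation.

def mixture_survival(t, pi, rate):
    s_uncured = math.exp(-rate * t)  # exponential survival for the uncured
    return pi + (1 - pi) * s_uncured

pi, rate = 0.3, 0.5
print(mixture_survival(0.0, pi, rate))  # 1.0: everyone is alive at t = 0
print(mixture_survival(1e6, pi, rate))  # 0.3: survival plateaus at the cure fraction
```

The plateau at pi is the signature of a cure fraction: unlike a proper survival function, S(t) does not decay to zero, which is why standard estimators misbehave and dedicated cure models are needed.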
Chapter~2 discusses the semiparametric inference for a two-phase failure-time-auxiliary-dependent sampling (FADS) design that allows the probability of obtaining the expensive exposures to depend on both the failure time and cheaply available auxiliary variables. Chapter~3 considers the generalized case-cohort design for studies with a cure fraction. A few directions for future research are discussed in Chapter~4.