Seminars
Inaugural Professorial Lectures
In this inaugural lecture, Dr. Shahriar Hossain reflected on his academic trajectory since joining The University of Texas at El Paso in 2013. He traced the evolution of his career as an educator and researcher, highlighting the pivotal experiences, strategic decisions, and professional milestones that defined his path to becoming a Full Professor. The presentation centered on the development of Dr. Hossain’s research in Artificial Intelligence, specifically its application within national security, cybersecurity, and healthcare. By sharing the formative stories and unexpected challenges behind his work, Dr. Hossain illustrated his approach to solving complex problems, mentoring students, and fostering meaningful interdisciplinary collaborations. The lecture concluded with a discussion on the lessons learned while balancing research, teaching, and service, offering a narrative on persistence, curiosity, and the evolving nature of an academic career at a Tier 1 research university.
Dr. Shahriar Hossain is a Full Professor in the Department of Computer Science at The University of Texas at El Paso (UTEP). Since 2013, Dr. Hossain has established a distinguished record of research and leadership in the field of Artificial Intelligence. His work focuses on leveraging AI to address high-stakes challenges in national security, healthcare, and cybersecurity. A dedicated mentor and collaborator, Dr. Hossain has been instrumental in the growth of UTEP’s research ecosystem. Throughout his career, he has translated curiosity-driven inquiries into impactful technological solutions, shaping both the academic landscape of his institution and the careers of the next generation of computer scientists.
Inaugural Professorial Lectures
On February 9, three faculty members shared their career journeys across different disciplines and professional paths. Through personal stories and reflections, the speakers discussed how their work has evolved over time, the challenges and opportunities they have encountered, and how experiences in research, industry, and teaching have shaped their careers. The talks offered perspective on navigating interdisciplinary work, adapting to change, and turning ideas into real-world impact.
This talk explored recent leaps in artificial intelligence and how AI agents are now being used in a wide range of real-world applications. Despite this progress, significant challenges remain in deploying AI reliably outside the lab.
- Christopher Kiekintveld
This talk explored the professional journey of Dr. Oscar A. Mondragón, whose background spans Communications and Electronics, Software Engineering, and Computer Engineering. Dr. Mondragón reflected on his evolution as a researcher and educator, from his undergraduate studies in Mexico City and his participation as a Co-PI on a binational research project with UTEP, to his doctoral studies and eventual return to UTEP to support new research initiatives.
- Oscar Mondragon
Human and AI Decision-Making in Cybersecurity: A Multiagent Modeling Approach
Cybersecurity threats are continually evolving, rapidly surpassing traditional static defenses. My research addresses these dynamic threats through an interdisciplinary, multi-agent modeling approach that integrates human decision-making and artificial intelligence. In this talk, I will present three interconnected projects: First, I will discuss the application of reinforcement learning to develop adaptive cyber-deception strategies capable of dynamically countering attacker behaviors. Second, I will illustrate how human-AI teaming paradigms can effectively combine human strategic oversight with AI's computational strengths, enhancing operational cybersecurity. Finally, I will turn to the attacker side and introduce cognitive adversary models designed to emulate realistic, human-like attacker decision processes. Together, these projects highlight the significant potential of integrating cognitive modeling, human-AI collaboration, and reinforcement learning to advance adaptive cybersecurity solutions.
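As a hedged, one-state (bandit-style) simplification of the reinforcement-learning-for-deception idea, the sketch below shows a defender learning which host should carry a decoy against a biased attacker; the attacker model, hosts, and rewards are all invented for illustration.

```python
# One-state (bandit-style) caricature of learning a deception strategy: the
# defender learns which host should carry a decoy. The attacker model and
# reward scheme are invented for this sketch, not taken from the talk.
import random

random.seed(0)
N_HOSTS = 4
q = [0.0] * N_HOSTS          # estimated value of placing the decoy on each host
alpha, epsilon = 0.1, 0.2    # learning rate, exploration rate

def attacker_target():
    """Invented attacker: prefers host 2, occasionally probes at random."""
    return 2 if random.random() < 0.7 else random.randrange(N_HOSTS)

for _ in range(5000):
    if random.random() < epsilon:
        action = random.randrange(N_HOSTS)               # explore
    else:
        action = max(range(N_HOSTS), key=q.__getitem__)  # exploit
    reward = 1.0 if attacker_target() == action else 0.0  # decoy was hit
    q[action] += alpha * (reward - q[action])

print("learned decoy values:", [round(v, 2) for v in q])  # host 2 dominates
```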
From Constraints to Capabilities: Context-aware Compute Scheduling for Cyber-Physical Systems
Cyber-physical systems (CPS), such as robots and self-driving cars, must meet strict physical requirements to avoid failure. Scheduling decisions directly affect these requirements, presenting a critical challenge: how do we find efficient schedules for CPS with heterogeneous processing units, such that the schedules are resource-bounded to meet the physical requirements? Heterogeneous computing systems, comprising CPUs, GPUs, and domain-specific accelerators, offer significant capabilities to reduce computation time, lower energy consumption, and expand operational conditions. However, the safe and correct operation of these systems requires computation to account for dynamically changing physical constraints and environmental contexts.
This talk will highlight the speaker's research contributions toward constraint- and context-aware scheduling for CPS with heterogeneous processing units. The presentation will introduce an energy- and latency-efficient multi-model, multi-accelerator object detection approach for autonomous systems, which considers the contextual features of input frames, available accelerators, and object detection models with varying execution characteristics. Following this, the speaker will present a constraint-based autonomous workload scheduling framework for CPS, providing a generalized solution to the heterogeneous scheduling problem while accounting for the physical constraints.
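As a rough illustration of the constraint-aware scheduling idea, here is a minimal sketch with invented profiling numbers (not the speaker's framework): given measured latency and energy profiles for each (model, accelerator) pair, a scheduler can pick the most energy-efficient configuration that still meets the physical deadline.

```python
# Minimal sketch of constraint-aware scheduling with invented profiling data:
# choose the most energy-efficient (model, accelerator) pair that still meets
# a latency deadline. Names and numbers are assumptions, not measurements.

# (model, accelerator) -> (latency_ms, energy_mJ)
PROFILES = {
    ("det-small", "gpu"): (12.0, 90.0),
    ("det-small", "dla"): (20.0, 35.0),
    ("det-large", "gpu"): (30.0, 220.0),
    ("det-large", "dla"): (55.0, 80.0),
}

def schedule(deadline_ms):
    """Most energy-efficient configuration meeting the physical constraint."""
    feasible = [(energy, model, acc)
                for (model, acc), (latency, energy) in PROFILES.items()
                if latency <= deadline_ms]
    if not feasible:
        return None   # no schedule satisfies the constraint
    _, model, acc = min(feasible)
    return model, acc

print(schedule(25.0))  # ('det-small', 'dla'): meets 25 ms at the lowest energy
print(schedule(10.0))  # None: the deadline is tighter than any profile
```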
Mehmet Belviranli is an Assistant Professor in the Computer Science Department at the Colorado School of Mines. Mehmet's research is driven by a passion for enhancing performance and efficiency within diversely heterogeneous embedded systems. He specializes in developing runtimes, scheduling algorithms, analytical models, extended memory spaces, programming abstractions, and providing OS & architecture-level support to improve the efficiency of heterogeneous computing. His contributions to the field are recognized through his publications in respected high performance computing and computer architecture conferences.
Please contact the host if you would like to meet with the speaker on April 25.
Synthesis and Evaluation of Creaky Voice (Vocal Fry)
In recent years, the increased use of neural networks for speech synthesis has led to significant advancements in the intelligibility and naturalness of synthesized speech. However, the reliance on neural networks has made intuitive control of prosody a challenging task, as these systems implicitly generate prosody from latent features. Furthermore, many speech synthesis engines are trained on read-speech data and are therefore limited to producing neutral prosody.
My work focuses on the generation and evaluation of synthesized speech trained on spontaneous speech data, using prosody-focused conditioning. Here, I present a tool that can synthesize speech with creaky voice. This serves a dual purpose. First, it improves the ability of synthesized speech to convey natural and contextually appropriate prosody, specifically by enabling the production of utterances with creaky voice where pragmatically relevant, for example in turn-taking behavior. Second, it reestablishes speech synthesis as a valuable tool for phoneticians: it enables perceptual experiments on creaky voice, which cannot be effectively studied using corpus-based or acting-based methods due to their inability to produce directly comparable utterances.
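As a toy illustration of what makes creak acoustically distinctive (not the synthesis tool presented in the talk): creaky voice has a very low, irregular F0, which can be caricatured as a glottal pulse train with period jitter.

```python
# Toy caricature of creak: a glottal pulse train with very low F0 and random
# period perturbation (jitter). All parameter values are assumptions chosen
# for illustration, not settings from the speaker's system.
import numpy as np

def creaky_pulse_train(duration_s=1.0, sr=16000, f0=45.0, jitter=0.25, seed=0):
    """Unit impulses at ~1/f0 intervals, each period randomly perturbed."""
    rng = np.random.default_rng(seed)
    signal = np.zeros(int(duration_s * sr))
    t = 0.0
    while t < duration_s:
        signal[int(t * sr)] = 1.0              # one impulse per glottal pulse
        period = (1.0 / f0) * (1.0 + jitter * rng.uniform(-1.0, 1.0))
        t += period                            # irregular spacing -> "creak"
    return signal

pulses = creaky_pulse_train()
print(int(pulses.sum()), "irregular pulses in 1 s of audio")
```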
Harm Lameris is a visiting scholar at UTEP focusing on the contribution of prosody to trust building. His background includes elements of both linguistics and computer science. Harm holds a bachelor's degree in English Language from the University of Groningen and a master's in Language Technology from Uppsala University. He is currently a fourth-year PhD student at KTH Royal Institute of Technology in Stockholm, where his work aims to integrate linguistic insights into synthesized speech.
Modeling and generating spoken feedback for conversational agents
Short feedback responses such as "mhm", "uh-huh", "oh", and "wow" are ubiquitous in human-human conversations, but they are often lacking in conversational agents. Although there is existing work on modeling these feedback responses, most studies have focused on predicting the timing of such responses, with less attention given to synthesizing feedback responses with appropriate prosody for their conversational context. Furthermore, these feedback responses are often grouped into two categories: continuers (e.g. "mhm", "uh-huh") which convey "I'm listening, continue speaking," and assessments (e.g. "wow", "aww") which convey attitudinal information. This presentation will discuss the communicative functions observed in human-human telephone conversations, the prosodic analysis of these feedback responses, and a listening test where participants evaluated synthesized feedback responses.
Dr. Figueroa obtained her PhD in Informatics from Aix-Marseille Université last December, where her research focused on modeling and generating spoken feedback responses (e.g., "uh-huh", "wow", "ooh") for conversational agents. Her research interests are speech and language technology, human communication, and conversational agents. Her work experience includes research positions at Furhat Robotics, Apple, ObEN, and Amazon. She holds a master's degree from the University of Edinburgh and a bachelor's from the University of California, Santa Cruz. She is under consideration for a Research Assistant Professor position in Prosodic Aspects of Spanish, English, and Cross-Language Communication through the Regents Research Excellence Program.
Understanding potential attacks against privacy systems using advanced machine learning
Tor is one of the most widely used anonymous network systems, with millions of daily users. It applies layered encryption over a three-hop route between client and server, which hides both ends of the communication link. In this talk, we introduce two fundamental attacks on Tor, website fingerprinting and end-to-end flow correlation, which analyze network traffic metadata such as traffic volume and timing information. These attacks enable potential adversaries to infer users' online activities and deanonymize the communicating ends, prompting Tor researchers to design appropriate defenses that safeguard the privacy of Tor users.
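To make the setting concrete, here is a hedged sketch of a closed-world website-fingerprinting classifier; the synthetic direction sequences below merely stand in for real Tor traces, and the feature and model choices are illustrative rather than those of any particular published attack.

```python
# Hedged sketch of website fingerprinting: a classifier trained only on traffic
# metadata (packet directions), never payload. Synthetic traces stand in for
# real Tor traffic; features and model choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_sites, traces_per_site, trace_len = 5, 40, 200

# Each "site" gets a characteristic direction pattern (+1 outgoing, -1 incoming),
# observed with noise -- a stand-in for the burst patterns real attacks exploit.
prototypes = rng.choice([-1, 1], size=(n_sites, trace_len))
X = np.repeat(prototypes, traces_per_site, axis=0).astype(float)
X += rng.normal(0, 1.0, X.shape)                      # network noise
y = np.repeat(np.arange(n_sites), traces_per_site)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"closed-world accuracy: {clf.score(X_te, y_te):.2f}")  # high despite encryption
```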
Se Eun Oh is an Assistant Professor in the Department of Computer Science & Engineering at Ewha Womans University, where she leads the AI Security Lab. Her research draws on a range of machine learning and deep learning techniques, leveraging them to improve both the security and the utility of AI-empowered devices (e.g., voice recognition, face detection) and privacy systems (e.g., Tor, HTTPS, VPN).
Dr. Oh obtained her Ph.D. in Computer Science at the University of Minnesota under the supervision of Professor Nicholas Hopper. She received her M.S. in Computer Science from the University of Illinois at Urbana-Champaign and her B.S. in Computer Science & Engineering from Ewha Womans University.
Understanding the Privacy Dimension of Wearables through Machine Learning-enabled Inferences
To keep up with ever-growing user expectations, developers keep adding new features to augment the use cases of wearables, such as fitness trackers, augmented reality head-mounted devices (AR HMDs), and smartwatches, without considering the security and privacy impacts. In this talk, I will introduce our recent results on understanding the privacy dimension of wearables through inference attacks facilitated by machine learning.
To start, I will present an exploration of the attack surface introduced by fitness trackers, where we propose an inference attack that breaches location privacy through the elevation profiles collected by fitness trackers. Our attack highlights that adversaries can infer a user's location from these elevation profiles. Then, I will review the attack surface introduced by smartwatches, developing an inference attack that exploits the smartwatch microphone to capture the acoustic emanations of physical keyboards and successfully infers what the user has been typing. Finally, I will present an exploration of AR HMDs through the design of an inference attack that exploits the geometric projection of hand movements in the air. The attack framework predicts the text typed on an in-air tapping keyboard that is visible only to the user.
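As a toy version of the elevation-profile idea (my illustration, with invented routes and a simple sliding-window metric), a leaked elevation snippet can be matched against candidate routes by minimum alignment error:

```python
# Toy version of the elevation-profile attack idea: match a leaked, noisy
# elevation snippet against candidate routes by minimum mean-squared error
# over all alignments. Routes and the matching metric are invented.
import numpy as np

rng = np.random.default_rng(1)
routes = {f"route_{i}": np.cumsum(rng.normal(0, 1, 500)) for i in range(3)}

observed = routes["route_1"][200:320] + rng.normal(0, 0.5, 120)  # noisy leak

def best_match(obs, candidates):
    best = None
    for name, elev in candidates.items():
        errs = [np.mean((obs - elev[s:s + obs.size]) ** 2)
                for s in range(elev.size - obs.size + 1)]
        if best is None or min(errs) < best[0]:
            best = (min(errs), name)
    return best

print(best_match(observed, routes))  # lowest error expected for route_1
```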
David Mohaisen (PhD'12, University of Minnesota) is a Full Professor of Computer Science at the University of Central Florida, where he has been since 2017. Previously, he was an Assistant Professor at SUNY Buffalo (2015-2017) and a Senior Scientist at Verisign Labs (2012-2015). His research interests are in the broad area of applied security and privacy, covering aspects of networked systems, software systems, IoT and AR/VR, machine learning, and blockchain systems. His research has been supported by a number of generous grants from NSF, NRF, AFRL, AFOSR, etc., and has been published in top conferences and journals alike, with multiple best paper awards. His work was featured in multiple outlets, including the New Scientist, MIT Technology Review, ACM Tech News, Science Daily, etc. Among other services, he has been an Associate Editor of various journals, including IEEE TDSC, IEEE TMC, and IEEE TPDS. He is a senior member of ACM (2018) and IEEE (2015), a Distinguished Speaker of the ACM and Distinguished Visitor of the IEEE Computer Society. More information about his work is at https://www.cs.ucf.edu/~mohaisen/
Towards Human-Centered AI for Scientific Discovery
In response to the escalating demands of big data in computational science and engineering, the field of data visualization and visual analytics has emerged as a critical bridge, enabling domain scientists to explore, analyze, and interpret their data. While traditional visualization methods have significantly contributed to scientific discovery, the rise of AI and machine learning offers unprecedented opportunities for further advancement. In this talk, I will first discuss how machine learning is transforming visualization through innovative techniques such as neural implicit representations, machine learning-powered accelerated visualization, and in situ visualization. I will then shift to the pivotal role of visualization in advancing AI, particularly in deep learning for scientific applications. Here, visualization serves as an indispensable tool for understanding, diagnosing, and improving AI models. Finally, I will address the challenges and opportunities in developing human-centered AI solutions tailored for the unique needs of scientific discovery. A key theme of this talk is the evolving symbiotic relationship between AI and visualization, where each enhances the other to push the boundaries of what is possible in scientific research. I will conclude by outlining my research group's vision of human-centered AI for scientific discovery and discussing future directions in this exciting field.
Chaoli Wang is a Professor in the Department of Computer Science and Engineering at the University of Notre Dame. He received his PhD in Computer and Information Science from The Ohio State University in 2006. Before joining Notre Dame, he was an Assistant Professor in the Department of Computer Science at Michigan Technological University. His primary research interests are scientific visualization and visual analytics, with an emphasis on large-scale data analysis and visualization, information-theoretic techniques, machine learning approaches, topological methods, and ensemble data visualization. He is a senior member of the IEEE and a recipient of the NSF CAREER Award. He has served on the editorial boards of IEEE Transactions on Visualization and Computer Graphics, IEEE Computer Graphics and Applications, Computer Graphics Forum, and Computers & Graphics, among others. He was the Papers Co-Chair for IEEE VIS 2021 and 2022, the largest annual gathering of researchers and practitioners in data visualization research and its applications.
Predictive Intelligence to Enable Continuous Software Re-engineering and Preventive Maintenance
By predicting how software codebases evolve over time, software engineers can address maintenance and reliability issues before they manifest. The talk presents a recent collaborative NSF proposal along with early results, covering broad topics including data mining, machine learning, predictive analysis, and software engineering.
A key challenge in designing long-living software systems is the unpredictable nature of software evolution. A sound design deemed effective today may result in a rapidly deteriorating, unsustainable codebase over time. Reengineering activities that aim to address existing deficiencies may contribute to quality and reliability degradation in the long term. This challenge is particularly evident in experimental and research software systems. Scientists developing research software often follow domain-specific, knowledge-driven discovery processes, and research software is subject to unique budgetary constraints and schedules. These factors, among others, further increase the unpredictability of software evolution and can complicate maintenance and sustainability.
If quality and reliability issues can be predicted early, engineers can design software that avoids or minimizes their impact, and can continuously re-design software as evolutionary scenarios become more probable. Moreover, it becomes possible to proactively address a reliability or maintenance issue even before it materializes.
Early results show that basic machine learning algorithms are highly effective in predicting key code quality and reliability metrics. The talk will highlight these results, and will present a recent NSF proposal submitted in collaboration with Dr. Hossain.
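For intuition, here is a minimal sketch of the kind of prediction involved, with synthetic stand-ins for historical code metrics; the feature names and the defect formula are assumptions, not the project's actual data or model.

```python
# Minimal sketch: a simple regressor forecasting a quality/reliability metric
# (defect count) from code-history features. Features, coefficients, and data
# are synthetic stand-ins, not results from the NSF proposal.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
churn      = rng.gamma(2.0, 50, n)   # lines changed per release (assumed)
complexity = rng.gamma(3.0, 4, n)    # e.g., cyclomatic complexity (assumed)
age        = rng.uniform(0, 60, n)   # module age in months (assumed)

# Assumed ground truth: defects grow with churn and complexity.
defects = 0.01 * churn + 0.8 * complexity + rng.normal(0, 3, n)

X = np.column_stack([churn, complexity, age])
X_tr, X_te, y_tr, y_te = train_test_split(X, defects, random_state=0)
model = Ridge().fit(X_tr, y_tr)
print(f"R^2 on held-out releases: {model.score(X_te, y_te):.2f}")
```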
Mitigating Bias in Machine Learning for Fair Decision-Making
Machine learning and Artificial Intelligence are now being used in many high-stakes applications that affect our daily lives. While they provide unprecedented advancements and opportunities, they also raise new challenges related to bias, discrimination, privacy, fairness, reliability, interpretability, and trust. My research focuses on mitigating biases in machine learning and deep learning models to build fair and trustworthy prediction models. In this talk, I will focus on the bias mitigation problem.
Bias arises when predicted decisions systematically favor one group (the privileged) over another (the unprivileged), causing discrimination against the unprivileged group. In this talk, I will define the fairness problem, present some real-life cases of bias, and discuss the debiasing approaches that have been proposed. I will present three of my research efforts: (1) Fairness-Aware Classification: Criterion, Convexity, and Bounds; (2) Fair Deep Neural Networks by Interaction-based Decorrelation of Sensitive Attributes; and (3) Fair Empirical Risk Minimization.
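For concreteness, two standard group-fairness measures that formalize this notion of bias are sketched below; the predictions and group labels are invented for illustration.

```python
# Two standard group-fairness measures used when evaluating bias mitigation.
# The model decisions and group labels below are invented for illustration.
import numpy as np

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # model decisions
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])   # 0 = privileged, 1 = unprivileged

p_priv   = y_pred[group == 0].mean()   # positive rate, privileged group
p_unpriv = y_pred[group == 1].mean()   # positive rate, unprivileged group

print(f"statistical parity difference: {p_unpriv - p_priv:+.2f}")  # 0 is fair
print(f"disparate impact ratio:        {p_unpriv / p_priv:.2f}")   # 1 is fair
```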
Finally, I will discuss the recent work I am doing with Dr. Nigel Ward on Fairness in Dialog Systems. Dialog systems (chatbots and voice-based assistants) are being increasingly used for various applications (customer service, companionship, teaching, e-health, social support, and shopping, among others). However, for such systems to gain social acceptance, they need to work well for everyone regardless of gender, age, language background, socio-economic status, or other social identity characteristics. Toward this goal, we are designing automated tools to help system developers find and fix issues in the treatment of various user groups. The first major component of this work is a dataset of 4368 crowd-sourced conversations about recommendations and clarifications, with a variety of scenarios addressing multiple dimensions of diversity. This work will stimulate progress on fairness in dialog by supporting research on fair measurement, fair synthesis, and more.
Dr. Eman Gomaa is currently an assistant professor in the Department of Computer Science at UTEP. She received her PhD degree from The Hong Kong University of Science and Technology. She was a postdoctoral scholar at North Carolina State University. Her research focuses on different problems in fairness-aware machine learning, including bias mitigation, measuring fairness of machine learning, multi-objective optimization for multiple fairness criteria, individual fairness, and intersectional fairness. Dr. Gomaa also conducts research in the area of machine learning for software testing and crowdsourcing. Recently, she received NSF support for her research.
Fourier Transform and Other Quadratic Problems under Interval Uncertainty
Computers are used to estimate the current values of physical quantities and to predict their future values – e.g., to predict tomorrow's temperature. The inputs x1, …, xn for such data processing come from measurements (or from expert estimates). Both measurements and expert estimates are not absolutely accurate: measurement results Xi are, in general, somewhat different from the actual (unknown) values xi of the corresponding quantities. Because of these differences Xi − xi (called "measurement errors"), the result Y = f(X1, …, Xn) of data processing is also somewhat different from the actual value of the desired quantity y – at least from the value y = f(x1, …, xn) that we would have obtained if we knew the exact values xi of the inputs.
In many practical situations, the only information that we have about measurement uncertainty is the upper bound Di on the absolute value of each measurement error. In such situations, if the measurement result is Xi, then all we know about the actual value xi of the corresponding quantity is that this value is in the interval [Xi − Di, Xi + Di]. Under such interval uncertainty, it is desirable to know the range of possible values of y.
In general, computing such a range is NP-hard already for quadratic functions f(x1, …, xn). Recently, a feasible algorithm was proposed for a practically important quadratic problem: estimating the absolute values (moduli) of Fourier coefficients. In this talk, we show that this feasible algorithm can be extended to a reasonably general class of quadratic problems.
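As a small, hedged illustration of why some quadratic cases remain tractable (a single-variable example of my own, not the Fourier-coefficient algorithm itself): for one quadratic variable, the exact range over an interval follows from the endpoints and the vertex.

```python
# For a single-variable quadratic, the interval range is computable exactly
# from the endpoints and the vertex -- illustrating that special quadratic
# cases stay tractable even though the general multivariate problem is
# NP-hard. Coefficients and the interval below are arbitrary examples.

def quad_range(a, b, c, lo, hi):
    """Exact range of a*x^2 + b*x + c over the interval [lo, hi]."""
    f = lambda x: a * x * x + b * x + c
    candidates = [f(lo), f(hi)]
    if a != 0:
        v = -b / (2 * a)                 # vertex: the only interior extremum
        if lo <= v <= hi:
            candidates.append(f(v))
    return min(candidates), max(candidates)

# Measured X = 2 with error bound D = 0.5, so x lies in [1.5, 2.5]:
print(quad_range(1.0, -4.0, 1.0, 1.5, 2.5))   # (-3.0, -2.75)
```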
Dr. Kreinovich's main area of expertise is dealing with uncertainty and imprecision. There are two main aspects to uncertainty and imprecision. The first aspect is that data comes from measurements, and measurements are never absolutely accurate. It is important to analyze how this uncertainty affects our predictions, and how to make decisions under such uncertainty. Dr. Kreinovich has collaborated with specialists in radio astronomy, geoscience, environmental science, biology, and other areas.
Assessment of Impacts of Ambient Intonation Boosting on High Functioning Individuals with Prosody and Behaviors Stereotypical of Autism
Ambient Vocal Intonation Boosting is a recently developed extension of Vocal Intonation Boosting (VIB) that modifies the acoustic resonance of a room to boost people's awareness of the intonation in their own and others' voices. VIB was initially developed to help people sing on pitch. It has also been observed to eliminate or reduce the intensity of several socially problematic speech patterns stereotypical of autism spectrum disorders in high-functioning individuals. This effect, which is consistent with current understandings of neuropsychology, has not yet been rigorously examined.
This presentation will begin with a description of intonation boosting, our anecdotal observations of its effects, and a short summary of relevant research in neuroscience. The remainder of the talk will describe experiments being planned to more rigorously characterize the behavioral effects of VIB on neurotypical populations with idiosyncratic behaviors stereotypical of autism. This work is being planned in collaboration with faculty in educational psychology, rehabilitation counseling, and psychology. My hope is that this presentation will elicit feedback that will be useful in refining these experimental plans.
Personal Data Practices and Problems
This talk will discuss the difference between using data for individual personalization versus aggregated metrics, why it is so important to keep user data safe, and cases to consider when designing systems that process user data.
Marissa Stephens is a Senior Software Engineer at Google, where she has worked for the past five years since graduating from MIT with a degree in Computer Science and Electrical Engineering. Her work on the Discover Recording Team focuses on collecting, processing, and storing user data in a system that is flexible enough to accommodate new laws and regulations, scalable to billions of users, and adaptable to future product interactions, all while maintaining user trust and safety.
Using neural networks to make practical local schemes
Sequence fingerprinting has long been used to compare large numbers of text-based objects, be they web pages, documents, or genomes. As the sets of objects we search over continue to grow, there is increasing urgency in the need for more efficient fingerprinting methods. Local schemes (sometimes called local algorithms) are a category of methods that choose a fingerprint, a substring of length k (a k-mer), from a window of the original sequence, i.e., a contiguous sequence of w k-mers. The term "local" is used because the representative for a window is chosen without any external information about the rest of the sequence. This provides several useful properties, the most consequential of which are: any two strings with a long enough exactly matching substring (one that covers a whole window) will share a fingerprint, namely the one for the shared window(s); and, when defined as in Schleimer et al., the selected fingerprints are guaranteed to be not too far apart (the winnowing guarantee). We plan to make some of the first systematic efforts to uncover how to find, store, and use practical and general local schemes by leveraging recent advances in active and reinforcement learning.
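For readers unfamiliar with local schemes, here is a minimal winnowing-style sketch (after Schleimer et al.); the hash function and the parameter values k and w are illustrative choices.

```python
# Minimal winnowing-style local scheme (after Schleimer et al.): within each
# window of w consecutive k-mers, select the k-mer with the smallest hash.
# The hash function and the parameters k and w are illustrative choices.
import zlib

def winnow(seq, k=4, w=5):
    """Return (position, k-mer) fingerprints, one chosen per window."""
    kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    picks = set()
    for start in range(len(kmers) - w + 1):
        window = kmers[start:start + w]
        # "local": the choice depends only on this window's own contents
        j = min(range(w), key=lambda i: zlib.crc32(window[i].encode()))
        picks.add((start + j, window[j]))
    return sorted(picks)

print(winnow("ACGTACGTTACGGACGT"))
# Long-enough exact matches (covering a full window) share a fingerprint, and
# selected positions are never more than w apart: the winnowing guarantee.
```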
This work is in its infancy, and the purpose of this presentation is to inform the department of the planned activities, solicit feedback on techniques that may be helpful, and identify pitfalls before they occur. The work is a continuation of work originally performed with co-authors Guillaume Marçais and Carl Kingsford at CMU.
Deep Learning for High-Performance Scientific Computing
Deep learning is transforming many fields and shows promise for scientific applications as well. My research group has been exploring applications of deep learning in several areas. We have developed a new approach to learning reduced-order models (ROMs) for the steady-state solutions of partial differential equations (PDEs) using deep convolutional autoencoders. The advantage of convolutional layers is that the learned features remain spatially local. We show that our approach delivers highly accurate, local structure-preserving, orders-of-magnitude smaller models that have excellent predictive quality.
For the problem of performance modeling of parallel scientific applications, a key challenge is finding a model that scales accurately to large problem sizes and processor counts and performs well on unseen data. We have developed a machine learning-based approach to performance modeling using a neural network surrogate model that enables accurate prediction of parallel execution time for a wide range of problem sizes and processor counts, much faster than traditional simulation-based approaches and with better scalability than previous machine learning approaches.
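A hedged sketch of the surrogate-modeling idea follows, with a synthetic timing formula standing in for measured runs; all constants, features, and the network size are assumptions, not the group's actual model.

```python
# Sketch of a neural surrogate performance model: learn execution time as a
# function of problem size and processor count. The timing formula below is
# a synthetic stand-in for measured runs; all constants are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 2000
size  = rng.uniform(1e5, 1e8, n)             # problem size
procs = rng.integers(1, 1025, n).astype(float)

# Assumed ground truth: compute scales as size/procs plus a communication term.
time = size / procs * 1e-6 + 0.002 * procs * np.log2(procs) \
       + rng.normal(0, 0.05, n)

X = np.column_stack([np.log10(size), np.log2(procs)])  # log features scale well
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                     random_state=0).fit(X[:1500], time[:1500])
print(f"held-out R^2: {model.score(X[1500:], time[1500:]):.2f}")
```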
I will also give a brief overview of some of our other research projects including using machine learning for algorithm selection in sparse linear algebra and using deep learning to analyze tropical cyclone satellite imagery.
Dr. Shirley Moore is a Professor in the Computer Science Department at UTEP. She is Director of the High Performance Computing Group and Site Director for the NSF Industry-University Cooperative Research Center for High Performance Reconfigurable Computing. She has over 25 years of experience in high-performance computing research with application to science and engineering, including computational chemistry, atmospheric modeling, medical imaging, and physics simulations. Dr. Moore was previously an Assistant Professor at the University of Tennessee, Knoxville, Computer Science Department. Her Ph.D. in Computer Science is from the University of Colorado Boulder. More information can be found on her web page at cs.utep.edu/svmoore.
Graph Recurrent Neural Networks: Using Recurrence to Capture Evolving Neighborhoods
Graph Neural Networks (GNNs) have become increasingly popular in machine learning applications involving graph-structured data. Most existing GNN approaches are designed using ideas from Convolutional Neural Networks (CNNs), where the graph structure plays the role of the image grid, and many such methods iteratively aggregate information within a local neighborhood of each node. However, GNNs that aggregate information via multiple iterations with the same parameters suffer from reaching a representation that contains global information too quickly, limiting their representation power. In this talk, I will present a different perspective on neighborhood aggregation, viewing a GNN as a Recurrent Neural Network (RNN) with the depth of the GNN model corresponding to the length of a sequence of neighborhood aggregations, where each "time step" in the sequence is an expansion of the neighborhood of a node. This opens the door to using existing techniques in RNNs such as Long Short-Term Memory (LSTM) to help control the amount of information flow between neighborhood expansions. I will show that this can help GNNs reach state-of-the-art performance in various applications.
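A minimal sketch of this viewpoint is given below; the dimensions and the plain adjacency-matrix aggregation rule are my assumptions, not the exact model from the talk.

```python
# Minimal sketch of "GNN as RNN": successive neighborhood aggregations are the
# time steps, and an LSTM cell gates how much new neighborhood information
# enters each node's state. Dimensions and aggregation rule are assumptions.
import torch
import torch.nn as nn

class GraphLSTM(nn.Module):
    def __init__(self, dim, steps):
        super().__init__()
        self.cell = nn.LSTMCell(dim, dim)  # same parameters at every step
        self.steps = steps

    def forward(self, x, adj):
        h, c = x, torch.zeros_like(x)
        for _ in range(self.steps):        # step t mixes in the t-hop frontier
            msg = adj @ h                  # aggregate current neighbor states
            h, c = self.cell(msg, (h, c))  # gates control information inflow
        return h

n, d = 6, 8
adj = (torch.rand(n, n) < 0.4).float()
adj = ((adj + adj.T) > 0).float()          # symmetric, unweighted graph
out = GraphLSTM(dim=d, steps=3)(torch.randn(n, d), adj)
print(out.shape)                           # torch.Size([6, 8])
```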
Dr. Yifan Sun is an Assistant Professor in the Computer Science Department at UTEP. He is also affiliated with the Computational Science Program at UTEP. He is interested in developing novel machine learning and deep learning techniques, especially graph neural networks, and applying them to interdisciplinary scientific domains such as social networks, wireless networks, and computational chemistry. He received his Ph.D. from the Computer Science Department at Virginia Tech in 2018, after which he worked as a postdoctoral researcher in the Department of Computational Mathematics, Science, and Engineering at Michigan State University. More information can be found at his web page: cs.utep.edu/ysun.
Toward building an automated bioinformatician: Parameter advising for improved scientific discovery
Modern scientific software has a large number of tunable parameters that must be adjusted to ensure the computational performance and accuracy of the results. When these parameter choices are made incorrectly, we may overlook significant results or falsely report insignificant ones. Optimizing the parameter choices for one input may not provide an assignment that is good for another, so this optimization process typically needs to be repeated for each new piece of data. Standard machine learning methods for solving this problem need to run the software repeatedly, which may not be suitable in practice. Because of the time required to optimize parameters, and the possible loss of accuracy when they are chosen incorrectly, the default parameter vector provided by the tool developer is often used. These defaults are designed to work well on average, but the most interesting cases are rarely "average".
In this talk, I will describe my first steps in automatically learning the correct program configuration for biological applications using a framework we call "Parameter Advising". To apply this framework to the problem of multiple sequence alignment we developed an accuracy estimator, called Facet, to help choose alignments since no ground truth is available in practice. When we use Facet for advising on the Opal aligner we boost accuracy by 14.6% on the hardest-to-align benchmarks. For the reference-based transcript assembly problem, when applying parameter advising to the Scallop assembler we see an increase in accuracy of 28.9%. The framework is general and can be extended to other problems in computational biology and beyond. I will discuss possible areas where parameter advising could be used to automatically learn to run complex analysis software.
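The advising loop itself is simple; here is a skeleton with toy stand-ins for the aligner and for a Facet-style estimator (both stubs are invented for illustration, not the actual tools).

```python
# Skeleton of the parameter-advising loop: run a tool under each parameter
# vector in an advisor set, score each result with an accuracy estimator
# (standing in for Facet), and keep the best. Tool and estimator are stubs.

def advise(tool, estimator, inputs, advisor_set):
    """Return the output whose estimated accuracy is highest."""
    best_output, best_score = None, float("-inf")
    for params in advisor_set:              # small, precomputed set of vectors
        output = tool(inputs, params)
        score = estimator(output)           # no ground truth needed at run time
        if score > best_score:
            best_output, best_score = output, score
    return best_output

# Invented stand-ins: "align" by joining with gaps of configurable width.
toy_tool = lambda seqs, p: ("-" * p).join(seqs)
toy_estimator = lambda out: -out.count("-")       # prefer fewer gaps
print(advise(toy_tool, toy_estimator, ["ACGT", "ACGA"], [1, 2, 3]))  # ACGT-ACGA
```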
Dan DeBlasio is currently a Lane Fellow of the Computational Biology Department in the School of Computer Science at Carnegie Mellon University, where he works in Carl Kingsford's group. He received his PhD in Computer Science from the University of Arizona in 2016 under John Kececioglu. He holds an MS and BS in Computer Science from the University of Central Florida, where he worked with Shaojie Zhang. He recently published a book on his work titled "Parameter Advising for Multiple Sequence Alignment". Dan also recently finished a two-year appointment to the Board of Directors of the International Society for Computational Biology and is an advisor to the ISCB Student Council, where he has held several roles.
Relativistic Effects Can Be Used to Achieve a Universal Square-Root (Or Even Faster) Computation Speedup
In this talk, we show that special relativity phenomena can be used to reduce the computation time of any algorithm from T to the square root of T. For this purpose, we keep the computers where they are, but the whole civilization starts moving around the computer at an increasing speed, reaching speeds close to the speed of light. A similar square-root speedup can be achieved if we place ourselves near a growing black hole. Combining the two schemes leads to an even faster speedup: from time T to the fourth root of T.
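One illustrative way a square-root scaling can arise (a calculation under a stated assumption of mine, not necessarily the construction used in the talk): if the travelers' Lorentz factor grows as γ(t) = √t, the proper time they experience while the stationary computer runs for time T is only O(√T).

```latex
% Proper time experienced by travelers whose Lorentz factor is \gamma(t):
\tau(T) = \int_{1}^{T} \frac{dt}{\gamma(t)}
% Assume the civilization accelerates so that \gamma(t) = \sqrt{t} for t \ge 1:
\tau(T) = \int_{1}^{T} t^{-1/2}\,dt = 2\bigl(\sqrt{T} - 1\bigr) = O\bigl(\sqrt{T}\bigr)
```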
Artificial Intelligence Approaches for Wickedly Hard National Security Problems
Many national security problems are wickedly hard in that they map to computational problem classes which are intractable. This seminar aims to illuminate how artificial intelligence approaches can be created to address these problems and produce useful solutions. In particular, two promising approaches will be discussed, namely (I) computational game theory employing coevolutionary algorithms for identifying high-consequence adversarial strategies and corresponding defense strategies, and (II) hyper-heuristics employing evolutionary computation for the automated design of algorithms tailored for high-performance on targeted problem classes.
The first approach will be illustrated with the Coevolving Attacker and Defender Strategies for Large Infrastructure Networks (CEADS-LIN) project funded by Los Alamos National Laboratory (LANL) via the LANL/S&T Cyber Security Sciences Institute (CSSI). This project focuses on coevolving attacker and defender strategies for enterprise computer networks. A proof of concept for operationalizing cyber security R&D from this project demonstrated in simulation that coevolution is capable of implementing a computational game theory solution for adversarial models of network security. Currently, a high-fidelity emulation framework with intelligent attacker and defender agents is being developed, with the end goal of providing a fully automated solution for identifying high-impact attacks and corresponding defenses.
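A bare-bones sketch of competitive coevolution in this spirit follows; the strategy encoding, payoff function, and population sizes are invented placeholders, not CEADS-LIN itself.

```python
# Bare-bones competitive coevolution: attacker and defender populations are
# scored against each other, then mutated hill-climbing style with elitism.
# Strategy encoding and payoff are invented placeholders for illustration.
import random

random.seed(0)
STRAT_LEN = 8   # which of 8 components to attack/defend (a bit per component)

def payoff(attack, defense):
    """Attacker scores wherever it targets an undefended component."""
    return sum(a and not d for a, d in zip(attack, defense))

def mutate(s):
    i = random.randrange(STRAT_LEN)
    return s[:i] + [1 - s[i]] + s[i + 1:]   # flip one bit

attackers = [[random.randint(0, 1) for _ in range(STRAT_LEN)] for _ in range(20)]
defenders = [[random.randint(0, 1) for _ in range(STRAT_LEN)] for _ in range(20)]

for gen in range(50):
    # evaluate each strategy against the entire opposing population
    best_att = max(attackers, key=lambda a: sum(payoff(a, d) for d in defenders))
    best_def = min(defenders, key=lambda d: sum(payoff(a, d) for a in attackers))
    # replace each population with mutated copies of its champion (elitism)
    attackers = [best_att] + [mutate(best_att) for _ in range(19)]
    defenders = [best_def] + [mutate(best_def) for _ in range(19)]

print("sample attacker:", best_att, "| sample defender:", best_def)
```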
The second approach will be illustrated with the Scalable Automated Tailoring of SAT Solvers project funded by Sandia National Laboratories with supplemental funding from the Computer Research Association's Committee on the Status of Women in Computing Research (CRA-W), and with the Network Algorithm Generating Application (NAGA) project funded via CSSI. These projects show how hyper-heuristics can be employed to create algorithms targeting arbitrary but specific problem classes for repeated problem solving where high a priori computation costs can be amortized over many problem class instances.
Daniel Tauritz is an Associate Professor & Associate Chair in the Department of Computer Science at the Missouri University of Science and Technology (S&T), a University Contract Scientist for Sandia National Laboratories, a University Collaboration Scientist at Los Alamos National Laboratory (LANL), the founding director of S&T's Natural Computation Laboratory, and founding academic director of the LANL/S&T Cyber Security Sciences Institute. He received his Ph.D. in 2002 from Leiden University for Adaptive Information Filtering employing a novel type of evolutionary algorithm. His research interests focus on artificial intelligence approaches to complex real-world problem solving with an emphasis on national security problems in areas such as cyber security, cyber physical systems, critical infrastructure protection, and program understanding. He was granted a US patent for an artificially intelligent rule-based system to assist teams in becoming more effective by improving the communication process between team members.