Past Seminars Archive
Computer arithmetic for DNN acceleration or how to compute right with errors and do it fast
Université de Nantes
When: September 1, 11:00 am - 12:00 pm
Where: BUSN 302
Numerical algorithms rarely use exact arithmetic to perform their computations; instead they employ more efficient floating-point or fixed-point representations. The use of these finite-precision numerical formats results in computational errors that influence the result. On the other hand, the choice of precision influences the performance (latency, memory usage) of the implemented algorithm.
In the search for a sweet spot between accuracy and efficiency, we first establish the relation between precision and accuracy for a given algorithm and then optimise arithmetic parameters (data formats, function approximations, hardware operators) such that accuracy requirements are satisfied at minimal cost. Ideally, this process should be automatic and applicable to large-scale systems.
In this talk we showcase a range of tools and techniques for arithmetic optimisations for DNN inference. We act on three levels:
- Automatic analysis of numerical quality of a DNN model and establishing the minimal accuracy requirement for each layer of a network;
- Building custom hardware operators for dot-product and employing the power of Integer Linear Programming to optimise the resources;
- Retraining DNNs in a hardware-friendly manner.
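The precision/accuracy trade-off behind these optimisations can be sketched in a few lines (an illustrative toy, not the speaker's tooling; the values and bit-widths below are made up):

```python
# Quantize dot-product operands to a fixed-point grid with a given number
# of fractional bits, and measure the error against the exact computation.

def quantize(x, frac_bits):
    """Round x to a fixed-point grid with 2**-frac_bits resolution."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

def fxp_dot(xs, ys, frac_bits):
    """Dot product with both operands quantized to frac_bits fractional bits."""
    return sum(quantize(x, frac_bits) * quantize(y, frac_bits)
               for x, y in zip(xs, ys))

xs = [0.123456, -0.654321, 0.333333, 0.777777]
ys = [0.111111, 0.222222, -0.444444, 0.888888]
exact = sum(x * y for x, y in zip(xs, ys))

for bits in (4, 8, 16):
    err = abs(fxp_dot(xs, ys, bits) - exact)
    print(f"{bits:2d} fractional bits -> absolute error {err:.2e}")
```

Each extra fractional bit roughly halves the quantization error but widens the hardware operators, which is exactly the cost/accuracy knob the talk's tools optimise.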
Anastasia Volkova is an Associate Professor in Computer Science at the University of Nantes, France. She conducts her research at the LS2N lab in team OGRE. Previously, she was a research resident at Intel Corporation in San Diego, USA and a postdoctoral researcher at INRIA in Lyon, France. She is a former PhD student of Christoph Lauter and Thibault Hilaire at Sorbonne University UPMC in Paris, France.
Host: Christoph Lauter
Development of a Digital Twin-based Diabetes Decision Support System
When: April 28, 11:00 am - 12:00 pm
Where: BUSN 332
The diabetes diary and the diagnostic and therapeutic toolkit being developed under the auspices of the Physiological Controls System Research Center of Óbuda University will be presented. Examples of such developments, including AI-powered blood glucose prediction and closed-loop control systems, will be introduced. Furthermore, the beta version of these developments and Óbuda University's cloud services framework will be presented.
GYORGY EIGNER (GSM’13–M’16–SM’20) received his B.Sc. degree in Mechatronics Engineering from Obuda University, Bánki Donát Faculty of Mechanical and Safety Engineering, in 2011, and his M.Sc. degree in Biomedical Engineering from Budapest University of Technology and Economics in 2013. He received his Ph.D. degree from Obuda University in 2017, where he is currently an associate professor and the Dean of the John von Neumann Faculty of Informatics. His main research focus is the application of advanced control methods in physiological settings. In these areas he has published more than 100 scientific works. He is the professional leader of the Zsambek Future Industries Science Park, Co-Chair of the Computational Cybernetics Technical Committee, and Director of the Robotics Special College of Obuda University.
Performance Prediction Toolkit (PPT): A Scalable Performance Modeling and Prediction Approach for GPUs and CPUs
Professor Hameed Badawy
Host: Dr. Moore
When: Friday, March 3, 11:00 A.M. - 12:00 P.M.
Where: CCSB 1.0202
With the advent of Machine Learning (ML) in many fields, highly efficient computational resources are necessary. GPUs top the charts for such ML-oriented computational resources. Measuring the performance of GPUs provides insights into what is achievable. We will present PPT, focusing on PPT-GPU, a scalable performance prediction toolkit for GPUs. PPT-GPU achieves scalability through a hybrid high-level modeling approach where some computations are extrapolated and multiple parts of the model are parallelized. The tool’s primary prediction models use pre-collected memory and instruction traces of the workloads to capture the dynamic behavior of the kernels accurately.
PPT-GPU reports an extensive array of GPU performance metrics accurately while being easily extensible. We use a broad set of benchmarks, including DeepBench, an ML workload, to verify the prediction accuracy. We compare the results against hardware metrics collected using vendor profiling tools and cycle-accurate simulators. The results show that the performance predictions are highly correlated with the actual hardware (MAPE < 16% and correlation > 0.98) on NVIDIA’s V100 GPUs. Moreover, PPT-GPU is orders of magnitude faster than cycle-accurate simulators. The comprehensiveness of the collected metrics can guide architects and developers in performing design space explorations. Furthermore, the tool’s scalability enables efficient and fast sensitivity analyses for performance-critical applications. We will show some results on NVIDIA’s A100, and we will quickly touch on our work on PPT modeling of CPUs.
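As a reminder of how accuracy figures like these are computed (the cycle counts below are made-up numbers, not the toolkit's data):

```python
# MAPE and Pearson correlation between model-predicted and
# hardware-measured kernel cycle counts (hypothetical values).
from statistics import mean, stdev

measured  = [1200, 5400, 980, 23000, 4100]   # e.g. profiler-collected cycles
predicted = [1100, 5900, 1020, 21000, 4500]  # e.g. model-predicted cycles

def mape(actual, pred):
    """Mean absolute percentage error, in percent."""
    return 100 * mean(abs(a - p) / a for a, p in zip(actual, pred))

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

print(f"MAPE = {mape(measured, predicted):.1f}%")
print(f"correlation = {pearson(measured, predicted):.3f}")
```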
Prof. Abdel-Hameed (Hameed) A. Badawy (Senior Member, IEEE) received a B.Sc. degree (Hons.) in Electronics Engineering from Mansoura University, Egypt, with a concentration on computers and control systems. He obtained his M.Sc. and Ph.D. in computer engineering from the University of Maryland. He is a tenured associate professor with the Klipsch School of Electrical and Computer Engineering, New Mexico State University, Las Cruces, NM, USA. He is the Computer Area Chair in the department. Also, he is a joint faculty with the Los Alamos (LANL) and Berkeley (LBNL) labs. He has been a visiting research scientist with the New Mexico Consortium. He was a lead research scientist with the High-Performance Computing Laboratory (HPCL) at George Washington University. His research interests include Performance Modeling, Prediction, Monitoring, and Evaluation, High-Performance Computer Architecture, Post-Moore’s law hardware, Hardware Security including IoT devices, Machine Learning for Computational problems, and Quantum Computing.
TACOS and FAJITA for everyone!
Marcelo Frias, Professor at Instituto Tecnológico de Buenos Aires
Host: Dr. Vladik Kreinovich, email@example.com, Dr. Maria Mariani, firstname.lastname@example.org
When: Tuesday, October 18, 1:30-2:30pm
Where: LART 205
In this talk I will present the current version of the bug-finding tool TACO. TACO, for Translation of Annotated COde, allows one to automatically detect whether a program annotated with a contract complies with, or violates, that contract. When a contract violation is detected, a program input exposing the violation is automatically synthesized by TACO.
Throughout the talk I will use the tool and explain the technical foundations that make TACO highly effective at detecting failures in programs over dynamically allocated data structures (a particularly difficult domain). I will also mention applications of TACO in automated test input generation through the FAJITA tool, and in automated program repair.
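TACO itself works on contract-annotated code, but the underlying idea can be sketched in a toy form (purely illustrative; the buggy function and inputs below are invented): check a postcondition against the implementation and report a violating input as a witness.

```python
def remove_first(xs, v):
    """Intended: return xs with the first occurrence of v removed, if any."""
    if v in xs:
        out = list(xs)
        out.remove(v)
        return out
    return xs[:-1]          # bug: drops the last element instead of a no-op

def contract_holds(xs, v):
    """Postcondition: result equals xs with at most one v removed."""
    expected = list(xs)
    if v in expected:
        expected.remove(v)
    return remove_first(xs, v) == expected

def find_violation(inputs):
    """Search small inputs for one that violates the contract."""
    for xs, v in inputs:
        if not contract_holds(xs, v):
            return xs, v    # counterexample exposing the bug
    return None

witness = find_violation([([1, 2], 1), ([3], 3), ([4, 5], 9)])
print("violating input:", witness)   # v absent, an element is silently lost
```

TACO does this symbolically via SAT solving rather than by enumeration, which is what makes it effective on heap-allocated structures.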
While the talk should be of particular interest to researchers in Software Engineering, I believe undergraduate students are the ones who will be most impressed, and they are especially welcome.
Professor at Instituto Tecnológico de Buenos Aires
Program Committee member of FM’19 (Formal Methods 2019), International Conference on Formal Methods, Porto, Portugal, 7-11 October, 2019.
Program Committee Member, Automated Software Engineering, ASE 2019.
Program Committee Member, International Conference on Software Testing, ICST 2017.
Program Committee Member, Automated Software Engineering, ASE 2016.
Program Committee Member, International Conference on Software Testing, ICST 2016.
Program Board Member, International Conference on Software Engineering, ICSE 2016.
An introduction to the Aberdeen Architecture: High Assurance Hardware State Machine Microprocessor Concept
Patrick Jungwirth, Ph.D., Computer Engineer, DEVCOM Army Research Laboratory
Host: Jaime Acosta, email@example.com
When: Friday, October 14th, 11:00 A.M. - 12:00 P.M.
(A Teams link for the virtual talk will be sent prior to the talk)
Where: CCSB G.0208
In a traditional computer, an operating system manages computer system resources. Current microprocessors execute instructions without any verification or authentication: there is no difference between safe instructions, coding errors, and malicious instructions. Complete mediation is a computer security principle meaning that access rights and authority are verified for every operation. The Aberdeen Architecture enforces Saltzer and Schroeder’s security principles down to the instruction execution level, providing nearly complete mediation for instruction execution.
The Aberdeen Architecture is also designed to block information leakage. It uses state machines to monitor and manage the execution pipeline; these state machine monitors completely virtualize the pipeline. The monitors manage and track four classes of information flow: (1) instruction execution flow, (2) control flow, (3) data flow, and (4) memory access flow. Each information flow class has an allowed set of operations. Security properties of data are completely tracked and managed from the moment data is created (data source) until it is retired (data sink).
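A toy software analogue of this complete-mediation idea (purely illustrative: the architecture implements it in hardware state machines, and the flow-class and operation names below are invented):

```python
# Every operation is checked against the allowed-operation set of its
# information-flow class BEFORE it may execute (complete mediation).

ALLOWED = {
    "control": {"branch", "call", "return"},
    "data":    {"add", "mul", "mov"},
    "memory":  {"load", "store"},
}

def mediate(flow_class, op):
    """Verify authority for every single operation; block anything else."""
    if op not in ALLOWED.get(flow_class, set()):
        raise PermissionError(f"{op!r} not permitted in {flow_class!r} flow")
    return f"executed {op}"

print(mediate("data", "add"))        # verified, then executed
try:
    mediate("control", "store")      # e.g. an injected/malicious instruction
except PermissionError as e:
    print("blocked:", e)
```

The point of the architecture is that this check happens in hardware for every instruction, so unverified instructions simply cannot run.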
Patrick Jungwirth is currently a computer architecture researcher at the Army Research Lab. He is also an adjunct professor teaching computer architecture courses at the University of Maryland, Baltimore County. He previously worked at the Aviation and Missile, Research Development and Engineering Center. He holds two computer architecture patents with one pending. His research interests cover computer architectures and digital signal processing. He has published a textbook on sampling theory and analog-to-digital conversion.
Industry 4.0 and Smart Cities
Host: Vladik Kreinovich, firstname.lastname@example.org
When: Thursday, October 6th, 1:30 P.M. - 2:50 P.M.
Where: LART 305
Main characteristics of Industry 4.0 include horizontal integration through networks, in order to facilitate internal cooperation, and vertical integration of subsystems within the factory, in order to create flexible and adaptable manufacturing systems. Another basic feature of Industry 4.0 is the Cyber-Physical System (CPS), a system of collaborating computational elements controlling physical entities, up to the creation of a Digital Twin of the factory.
The Smart City approach, developed at Czech Technical University in Prague, is based on these principles of Industry 4.0, which are further researched and practically applied in “Smart Evropská Avenue,” which plays the role of a digital testbed or living laboratory for the city of Prague.
Smart Evropská Avenue uses a variety of sensors, ranging from physical detectors and cameras to space imaging (weather prediction, city temperature maps, and emission maps). It should be noted that even a vehicle or a mobile phone in this concept becomes an intelligent sensor providing important data. Thanks to current data, city management moves from the original predefined dynamic plans to adaptive control algorithms that ensure the coordination of entire territorial units. Different simulation tools are used to validate individual strategies; in virtual space, it is much easier to model responses to different types of extraordinary events.
Prof. Miroslav Svítek graduated in radioelectronics from Czech Technical University in Prague in 1992. In 1996, he received the Ph.D. degree in radioelectronics from the Faculty of Electrical Engineering, Czech Technical University in Prague. Since 2008, he has been a Full Professor in engineering informatics at the Faculty of Transportation Sciences, Czech Technical University in Prague, and from 2010 to 2018 he was Dean of that faculty. Since 2018, he has been a Visiting Professor in smart cities at the University of Texas at El Paso, USA. The focus of his research includes complex system sciences and their practical applications to Intelligent Transport Systems, Smart Cities, and Smart Regions. He is the author or co-author of more than 200 scientific papers and 10 books.
Early Exascale Hardware and Applications
Shirley Moore, University of Texas at El Paso
Host: Shirley Moore
When: Friday, October 7th, 11:00 A.M.
Where: CRBL 205
An exascale computer is capable of executing at least 10^18 double-precision floating-point operations per second. The world’s first exascale computer is Frontier at Oak Ridge National Laboratory. Frontier was announced in June 2022 as the world’s fastest supercomputer on the TOP500 list. This achievement is the result of several years of collaboration between government agencies, lab researchers, and hardware vendors on the design of three exascale machines: Frontier, Aurora, and El Capitan. I will describe the process that was used to evaluate and select from the candidate exascale architectures and then give details of the selected architectures. Finally, I will describe three exascale application efforts I have been involved with in the areas of fusion plasma modeling, simulation of heterogeneous catalysis using computational chemistry, and simulation of cancer cell transport through the human circulatory system.
Shirley Moore is an Associate Professor in the Computer Science Department at the University of Texas at El Paso. Prior to returning to UTEP in fall of 2020, she was a Senior Computer Scientist in the Future Technologies Group at Oak Ridge National Laboratory (ORNL) for four years, where she led ORNL participation in several Exascale Computing Project (ECP) efforts. She received her PhD in Computer Sciences from Purdue University in May of 1990 in the area of distributed database systems. Her research interests are in parallel and distributed computing, including edge computing, with a focus on performance evaluation and optimization.
Programmatic Reinforcement Learning for All
Ashutosh Trivedi, University of Colorado Boulder
Host: Saeid Tizpaz Niari
When: Friday, September 30th, 11am-12pm
Where: CCSB G.0208
Reinforcement Learning (RL) is an optimization-based approach to problem-solving where learning agents rely on scalar reward signals to discover optimal solutions. The recent success of RL has demonstrated its potential as a viable alternative to "human" programming. However, observing these success stories closely, it is evident that significant expertise is required to deploy RL: to design a suitable approximation architecture, to design a suitable "flat" representation of the environment in the form required by that architecture, and to specify objectives in the language of scalar rewards. This rigid interface---in the form of feature constructions, manual approximations, and reward engineering---between the programmers and the RL algorithms is cumbersome and error-prone. The resulting lack of usability and trust contributes to barriers to entry in this promising field. My group is working towards democratizing reinforcement learning by developing principled methodologies and powerful tools to improve the usability and trustworthiness of RL-based programming at scale.
The aforementioned low-level interactions between the programmers and the RL are akin to programming systems in a low-level assembly language. I envision a programmatic approach to RL where the programmers interact with the RL algorithms by writing programs in a high-level programming language expressing the simulation environment, the choices available to the learning agent, and the learning objectives, while an underlying “interpreter” frees the programmer from the burden of feature construction and approximation heuristics demanded by the state-of-the-art RL algorithms. We dub this setting high-level programmatic reinforcement learning (or programmatic RL for short).
To realize the promise of improved usability of programmatic RL, we need RL algorithms capable of efficiently handling rich programmatic features (functional recursion and recursive data structures) and complex dynamical models (governed by ordinary differential equations) while guaranteeing convergence to the optimal value. To enable transparent and trustworthy RL, we need translation schemes to compile learning requirements expressed in high-level languages to scalar reward signals. In this talk, I will summarize our efforts and breakthroughs towards a framework for programmatic RL capable of reasoning with formal requirements, real-time constraints, and recursive environments.
Ashutosh Trivedi received his B.Eng. in Computer Science from NIT Nagpur in 2000, his M.Tech. in Electrical Engineering from IIT Bombay in 2003, and his Ph.D. in Computer Science from the University of Warwick in 2009. He was a postdoctoral researcher at the University of Oxford and at the University of Pennsylvania between 2009 and 2012. He was an assistant professor of Computer Science at IIT Bombay from 2013-2015. He joined the University of Colorado Boulder in 2015, where he is currently an assistant professor of Computer Science. He is also a member of the Programming Languages and Verification (CUPLV) Group. His research interests include formal methods, optimization, and game theory with applications in trustworthy AI, cyber-physical systems, software security, and fairness in AI. He is a recipient of an NSF CAREER award, two AFRL fellowships, and a Liverpool-India fellowship.
Interval eigenvalues using constraint interval theory with applications
Marina Tuyako Mizukoshi, UFGo
Host: Professor Kreinovich
When: Friday, August 5, 11:00 A.M. - 12:00 P.M.
Where: CCSB G.0208
This talk will introduce constraint interval (CI) theory. CI will be applied to study the interval eigenvalues of an interval symmetric matrix. Interval eigenvalues are important for obtaining stability conditions for dynamical systems with uncertainty in their coefficients.
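This is not the constraint-interval machinery from the talk, but a quick illustration of what "interval eigenvalues" means: for a symmetric interval matrix, the extreme eigenvalues are attained at vertex matrices (Hertz's theorem), so a small example can be bounded by enumeration. The interval entries below are made up.

```python
# Bound the eigenvalue range of the 2x2 interval symmetric matrix
# [[ [1,2], [-0.5,0.5] ], [ [-0.5,0.5], [3,4] ]] via vertex enumeration.
import itertools
import math

def eig2_sym(a, b, c):
    """Eigenvalues of [[a, b], [b, c]] in closed form."""
    m, d = (a + c) / 2, math.hypot((a - c) / 2, b)
    return m - d, m + d

a_iv, b_iv, c_iv = (1.0, 2.0), (-0.5, 0.5), (3.0, 4.0)

lo, hi = math.inf, -math.inf
for a, b, c in itertools.product(a_iv, b_iv, c_iv):
    lam_min, lam_max = eig2_sym(a, b, c)
    lo, hi = min(lo, lam_min), max(hi, lam_max)

print(f"eigenvalues of the interval matrix lie in [{lo:.3f}, {hi:.3f}]")
```

Vertex enumeration is exponential in the matrix size; the constraint-interval approach presented in the talk is one way to reason about such problems without that blow-up.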
Professor Mizukoshi, of the Institute of Mathematics and Statistics at the Universidade Federal de Goias in Brazil, is currently a Visiting Professor at the University of Colorado Denver.
Computational Intelligence For Engineering Solutions: Invariance-Based Approach
(a brief overview of the Fall 2022 class CS 5354/CS 4365)
When: Friday, April 22, 2:30pm - 3:30pm
Where: Business, room 318
The main purpose of computers is to process real-life data, so that we will be able to understand the current state of a system and to predict its future behavior. In some situations -- e.g., in basic mechanics -- we have fundamental from-first-principles laws that enable us to make the corresponding predictions. However, in many other situations, especially in engineering, we only have approximate empirical formulas. For example, it is not possible to predict, based on first principles, how pavement will deteriorate with time, or how people will change their opinions about goods.
In such situations, we face the following problems:
- Why these formulas? Users are usually reluctant to use purely empirical formulas. Reason: there is no guarantee that these formulas will work in new situations. It is therefore desirable to come up with theoretical explanations for these formulas.
- Maybe these formulas are not the best. Within these theoretical explanations, are the current formulas most adequate? And if they are not the best, what are the better formulas?
- What next? Empirical formulas are usually approximate. If we want a more accurate description, we need more complex, more detailed formulas. Of course, the ultimate test is comparison with observations and measurement results. In view of the theoretical explanations, what are good candidates for such more complex formulas?
Similarly, in many engineering applications, there are semi-empirical methods for solving the corresponding problems. In such cases, similar problems appear.
In this course, using examples from engineering and computer science applications, we will explain how invariance-based techniques – widely used in physics – can help explain and improve empirical formulas and methods.
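As a one-formula taste of the invariance-based approach (a standard textbook example, not taken from the course materials): suppose an empirical dependence y = f(x) should not change its form when we change the measuring unit for x.

```latex
% Scale invariance: re-scaling the unit of x (x -> \lambda x) may only
% re-scale the result:
f(\lambda x) = c(\lambda)\, f(x) \quad \text{for all } \lambda > 0.
% Differentiating with respect to \lambda and setting \lambda = 1 gives
% the separable ODE
x\, f'(x) = c'(1)\, f(x),
% whose solutions are exactly the power laws
f(x) = A\, x^{\alpha}, \qquad \alpha = c'(1).
```

This one derivation already explains why so many purely empirical engineering formulas (e.g., pavement deterioration curves) take power-law form, which is the kind of explanation the course develops systematically.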
See https://www.cs.utep.edu/vladik/cs5354.22/syllabus.html for a detailed description of the class.
Socio-Cognitive Factors of Decision making in Spear-Phishing Attacks
Dr. Prashanth Rajivan
When: Friday, April 15, 11am - 12pm
Despite significant advancements in security technologies, phishing attacks continue to be rampant and successful because it is cognitively challenging for humans to distinguish phishing emails from real messages. One phishing email and one vulnerable person is all it takes for an attacker to succeed. To combat the rampant phishing threats, companies rely primarily on machine learning algorithms for automated detection, and on the human ability to detect attacks that algorithms miss. Although current algorithms are successful in detecting known mass-phishing messages, they do not guarantee complete protection.
In this talk, I will discuss experiments we are conducting to understand the human decision-making process in the context of phishing. First, I will describe a new simulation paradigm we have developed for studying human behavior in spear-phishing attacks from both the attacker and end-user perspectives. Next, I will present results from a cognitive model based on Instance-Based Learning Theory developed to predict and analyze human responses to phishing emails obtained from a laboratory experiment. I will describe the effectiveness of integrating natural language processing methods, such as GloVe and BERT, with cognitive models to predict human responses to phishing emails. Finally, I will introduce follow-on research directions I am currently pursuing in phishing and other related areas such as misinformation.
Prashanth Rajivan is an assistant professor of Industrial and Systems Engineering and adjunct assistant professor of Human Centered Design and Engineering at the University of Washington. His research agenda lies at the intersection of human factors and computer security. His areas of interest include security and privacy decision making, simulation and modeling, computer-supported cooperative work, and applied cognitive science. Prior to this appointment, he was a Postdoctoral Research Fellow at the Department of Social and Decision Sciences, Carnegie Mellon University, Pittsburgh. He holds a Ph.D. in Human Systems Engineering (2014) and an M.S. in Computer Science (2011) from Arizona State University, USA. He is the author of several peer-reviewed publications and book chapters. His work on multi-agent models of teamwork in cyber defense was awarded the best student paper at the HFES annual conference in 2014. His dissertation work was a finalist for the Human Factors Prize on Cyber Security in 2017. His research is funded by NSF (CAREER), AHRQ, Starbucks, and CISCO.
Dr. Palvi Aggarwal
CS Teaching Innovations Talk
Dr. Diego Aguirre; Dr. Eric Freudenthal
When: Friday, April 1, 11am - 12pm
Where: CCSB G.0208
In this short talk, I will present strategies I have explored in my attempt to cultivate a growth mindset among CS 2302 (Data Structures) students. In particular, I will present: 1) alternative grading systems that reward growth without sacrificing rigor, 2) reflection-based assignments that reinforce deliberate-practice tenets, and 3) sample course assignments and activities designed to enhance students’ competencies in key problem-solving skills (e.g., problem decomposition, abstraction, solution analysis and prioritization).
What Happened when I (Sorta) Let Students set the Rules. The pandemic hit our students badly and the extension of conventional classroom policies to online seemed punitive. Instead, I asked students what they wanted and (mostly) gave it to them. After a bit of adjustment and reality checks, this strategy seems to have improved both teaching teams’ and student experiences. We’re back in the classroom, and my pandemic-motivated policies are (mostly) working out fine.
Fairness-aware Configuration of Machine Learning Libraries
Dr. Saeid Tizpaz-Niari
When: Friday, February 18th 2022, 11:00 AM
Where: Business 331
This talk investigates the parameter space of machine learning (ML) algorithms in aggravating or mitigating fairness bugs. Data-driven software is increasingly applied in social-critical applications where ensuring fairness is of paramount importance. Existing approaches address fairness bugs by modifying either the input dataset or the learning algorithms. On the other hand, the selection of hyperparameters, which provide finer controls of ML algorithms, may enable a less intrusive approach to influencing fairness. Can hyperparameters amplify or suppress discrimination present in the input dataset? How can we help programmers detect, understand, and exploit the role of hyperparameters to improve fairness?
We design three search-based software testing algorithms to uncover the precision-fairness frontier of the hyperparameter space. We complement these algorithms with statistical debugging to explain the role of these parameters in improving fairness. We implement the proposed approaches in the tool Parfait-ML (PARameter FAIrness Testing for ML Libraries) and show its effectiveness and utility over five mature ML algorithms as used in six social-critical applications. In these applications, our approach successfully identified hyperparameters that significantly improve fairness (vis-a-vis state-of-the-art techniques) without sacrificing precision. Surprisingly, for some algorithms (e.g., random forest), our approach showed that certain configurations of hyperparameters (e.g., restricting the search space of attributes) can amplify biases across applications. Upon further investigation, we found intuitive explanations of these phenomena, and the results corroborate similar observations from the literature.
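As a toy illustration (not Parfait-ML itself) of the kind of fairness metric such tools trade off against precision, here is statistical parity difference computed under two hypothetical hyperparameter configurations; all predictions below are invented:

```python
# Statistical parity difference: the gap between the positive-prediction
# rates of two demographic groups. 0 means demographic parity.

def positive_rate(preds):
    return sum(preds) / len(preds)

def parity_difference(preds_a, preds_b):
    """|P(y=1 | group A) - P(y=1 | group B)|."""
    return abs(positive_rate(preds_a) - positive_rate(preds_b))

# Hypothetical predictions on the same test set under two configurations.
config_1 = {"group_a": [1, 1, 0, 1, 1], "group_b": [0, 0, 1, 0, 0]}
config_2 = {"group_a": [1, 0, 1, 0, 1], "group_b": [0, 1, 1, 0, 1]}

for name, preds in (("config_1", config_1), ("config_2", config_2)):
    gap = parity_difference(preds["group_a"], preds["group_b"])
    print(f"{name}: parity difference = {gap:.2f}")
```

Search-based testing, as in the talk, explores the hyperparameter space looking for configurations at the favorable (or surprisingly unfavorable) end of this metric while tracking precision.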
The key theme of Dr. Tizpaz-Niari's research is to automate the process of finding and explaining bugs, vulnerabilities, and fairness issues in large-scale software and machine learning systems. In particular, he is interested in analyzing differential classes of properties that include confidentiality, privacy, and fairness. His findings have helped discover multiple performance bugs in popular ML libraries such as scikit-learn, fairness bugs in state-of-the-art machine learning algorithms, and timing side-channel vulnerabilities in critical Java libraries such as OpenJDK and Apache. He leads the Responsible, Informative, and Secure Computing Lab (RISC) at UTEP.
Resolving the Privacy Paradox: Towards Trustworthy and Collaborative AI
When: Friday, February 4th 2022, 10:00 – 11:00 AM
Where: CCSB 1.0410
As an immense number of connected devices such as mobile devices, wearables, and autonomous vehicles generate massive amounts of data each day to develop machine learning (ML) based intelligent services, multiple spheres of human life, such as healthcare, entertainment, and industry, are being transformed. The traditional process for developing machine learning applications is to gather a large dataset, train a model on the data, and run the trained model on a cloud server. Due to the growing tension between the need for big data and the need for privacy protection, it is increasingly attractive to enable edge devices to collaboratively train ML models while keeping the data local. However, deploying this collaborative ML architecture raises a set of challenges, such as new privacy risks, limited resources, heterogeneous data and devices, and security vulnerabilities. In this talk, I will cover privacy-preserving collaborative ML approaches that address the new privacy risks in collaborative ML. I will also discuss my observations and responses to other key challenges, as well as important research directions, with respect to the development of collaborative ML.
Rui Hu received the B.Eng. degree in Electrical Engineering from Jinan University, China, in 2017. She is currently a Ph.D. candidate in the Department of Electrical and Computer Engineering at the University of Texas at San Antonio. Her research interests include security and privacy, machine learning, edge computing, and the Internet of Things. She was awarded the Dorrough Distinguished Graduate Fellowship and Graduate Student Professional Development Awards in 2018 and 2019, respectively.
Robots, Language, and Representations
Dr. Nakul Gopalan
When: Tuesday, February 1st, 2022, 3:00 – 4:00 PM
Where: Prospect Hall, Room 234
Robots are increasingly present in our lives, from cleaning our houses to automating logistics. However, these robots still operate as solitary agents, performing structured tasks without the ability to collaborate and learn with humans.
A key challenge here is that robots perceive the world and operate in it using sensors and actuators that are continuous, low-level, and noisy. People, on the other hand, reason, plan, specify, and teach tasks using high-level concepts, without worrying about the low-level continuous nature of the world. To address this challenge, I develop computational methods that, first, allow robots to learn representations and skills to solve novel tasks. Moreover, these methods and representations also enable robots to be taught and programmed using natural language communication, allowing robots to understand a human partner's intent.
In this talk I first demonstrate how representations for planning and language understanding can be learned together to follow commands in novel environments. In the second part of the talk, I demonstrate a more practical approach in which language can be grounded to pre-trained deep policy representations to solve novel task specifications.
Together, these approaches empower robots to learn unstructured tasks via language and demonstrations. I will then discuss the implications of such approaches in collaborative task solving with robots in homes, offices and industries.
Nakul Gopalan is a postdoctoral researcher in the CORE Robotics Lab with Prof. Matthew Gombolay at Georgia Tech. He completed his PhD at Brown University's Computer Science department in 2019. Previously he was a graduate student in Prof. Stefanie Tellex's H2R lab at Brown. His research interests lie at the intersection of language grounding and robot learning. Nakul has developed algorithms and methods that allow robots to be trained by leveraging demonstrations and natural language descriptions. Such learning would improve the usability of robots within homes and offices. His other research interests are in hierarchical reinforcement learning and planning. His work has received a best paper award at the RoboNLP workshop at ACL 2017.
Modeling the 3D Human Genome Structure
Dr. Oluwatosin Oluwadare
When: Thursday, January 27th, 2022, 10:00-11:00 am
Where: Prospect Hall, Room 234
Nineteen years after the sequencing of the human genome and twenty years after the introduction of Chromosome Conformation Capture (3C) technologies, three-dimensional (3D) inference and big data remain problematic in the field of genomics, and specifically, in the 3C data analysis research area. Chromosome 3D structure inference involves reconstructing a genome’s 3D structure or, in some cases, an ensemble of structures from contact interaction frequencies extracted from Hi-C experiments. Further questions remain about chromosome topology and structure; enhancer-promoter interactions; the locations of genes, gene clusters, and transcription factors; the relationship between gene expression and epigenetics; and chromosome visualization at a larger scale. Oluwatosin Oluwadare’s research interests lie at the intersection of computer science, particularly artificial intelligence (AI) and data mining, and biological studies, particularly genomics, with a focus on developing machine learning and data mining methods to analyze big biomedical data and address fundamental problems in biomedical sciences. In this presentation, he will discuss his research on human genome 3D structure reconstruction from high-throughput sequencing data (Hi-C). He will also present the algorithms “3DMax” and “CBCR” for chromosome and genome 3D structure prediction from Hi-C data. The talk will also highlight “GenomeFlow,” a comprehensive GUI tool to facilitate the entire modeling, analysis, and visualization of the 3D genome organization.
Dr. Oluwatosin Oluwadare is an Assistant Professor of Computer Science and Innovation at the University of Colorado, Colorado Springs (UCCS). He received his Bachelor of Technology degree in Computer Science (CS) from the Federal University of Technology, Akure, Nigeria, his Master of Science degree in CS from the University of Texas, Arlington, and his Ph.D. in CS from the University of Missouri, Columbia. Dr. Oluwadare’s research focus areas are bioinformatics and computational biology, machine learning, deep learning, and big data analytics. He has developed novel methods focused on machine learning applications in bioinformatics and has published in reputable journals. Dr. Oluwadare is the Director of the Oluwadare Lab research group at UCCS, where he and his students focus on developing computational methods to address complex biological questions. More details about his research and research group can be found at https://academics.uccs.edu/~ooluwada/.
Dr. Oluwadare has a keen interest in researching machine learning and its various applications. He proposed and led the development of a software app called EyeCYou, which uses AI to provide a facial description of a person to the visually impaired. Learn more about EyeCYou here: http://eyecyouapp.com/. The app is freely available for download on the Apple App Store.
Towards Reliable Machine Learning
When: Friday, January 21st, 2022, 10:00-11:00 am
Where: Prospect Hall, Room 234
Deep Neural Networks (DNNs) are a performance-hungry application. Floating-Point (FP) and custom floating-point-like arithmetic satisfy this hunger. While there is a need for speed, inference in DNNs does not seem to have much need for precision. Many papers experimentally observe that DNNs can successfully run at almost ridiculously low precision. Whenever fast FP is absent, such as on embedded controllers, Fixed-Point (FxP) arithmetic gets drawn in. FxP, however, has its own particular challenges.
The aim of this talk is manifold: first, to shed some theoretical light on why a DNN's FP accuracy stays high at low FP precision. We observe that the loss of relative accuracy in the convolutional steps is recovered by the activation layers, which are extremely well conditioned. We give an interpretation of the link between precision and accuracy in DNNs.
Second, in this talk, we would like to give an overview of the principles of error analysis for FP and FxP arithmetic. We shall derive the basic machine epsilon from the IEEE754 FP definitions and demonstrate its use in different error models. We shall introduce interval arithmetic as a tool for computing without "forgetting" about error. We shall also comment on what challenges stem from the historical way IEEE754 is written and what we see as possible solutions to these historical hindrances.
Third, the talk presents a software framework for semi-automatic FP and FxP error analysis for the inference phase of deep learning. Compatible with common Tensorflow/Keras models, it leverages the frugally-deep Python/C++ library to transform a neural network into C++ code in order to analyze the network's need for precision. This rigorous analysis is based on interval and affine arithmetic to compute absolute and relative error bounds for a DNN. The tool is able to determine the Most Significant Bit (MSB) position of every FxP variable in a given DNN model, in order to ensure that no overflow can occur while maintaining appropriate output accuracy. While word length is currently still fixed, this feature of our tool is unique. We demonstrate our tool with several examples.
Finally, we shall provide an outlook on what we think is still missing for completely Reliable Machine Learning.
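Two of the ingredients above, machine epsilon and an a priori rounding-error bound for a dot product, can be illustrated in a few lines of Python. This is a coarse sketch under the standard error model fl(a op b) = (a op b)(1 + d) with |d| <= eps, not the framework presented in the talk; the 2*n*eps bound is a deliberate simplification.

```python
def machine_epsilon():
    """Smallest eps with 1.0 + eps != 1.0, found by repeated halving."""
    eps = 1.0
    while 1.0 + eps / 2.0 != 1.0:
        eps /= 2.0
    return eps

EPS = machine_epsilon()  # 2**-52 for IEEE 754 binary64

def dot_with_error_bound(xs, ws):
    """Dot product plus a coarse rounding-error enclosure: n multiplies
    and n adds each contribute at most EPS relative error, giving a
    relative bound of roughly 2*n*EPS on the result."""
    acc = sum(x * w for x, w in zip(xs, ws))
    bound = abs(acc) * 2 * len(xs) * EPS
    return acc - bound, acc + bound

lo, hi = dot_with_error_bound([1.0, 2.0, 3.0], [0.5, 0.25, 0.125])
```

The enclosure [lo, hi] is guaranteed to contain the exact dot product; tighter bounds are exactly what interval and affine arithmetic provide in a systematic way.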
Computer Vision for Advancing 3D Immersive Applications
Dr. Kevin Desai
When: Wednesday, December 8, 2021, 3:00-4:00pm
Where: Prospect Hall, Room 234
Kevin Desai is an Assistant Professor of Instruction in the Computer Science department at the University of Texas at San Antonio. He received his PhD degree in Computer Science from The University of Texas at Dallas in May 2019 with his dissertation titled “Quantifying Experience and Task Performance in 3D Serious Games”. He also received his MS in Computer Science from The University of Texas at Dallas in May 2015, and his Bachelor of Technology in Computer Engineering from Nirma University (India) in June 2013. Dr. Desai’s research experience and interests are in the fields of Computer Vision and Immersive (Virtual / Augmented / Mixed) Realities with applications in the domains of healthcare, rehabilitation, virtual training, and serious gaming. He conducts interdisciplinary research which mainly revolves around the real-time capture and generation of 3D human models and their incorporation in collaborative 3D immersive environments. Dr. Desai’s work has been published in peer-reviewed international conferences in the fields of computer vision (e.g., ICIP), VR / AR / MR (e.g., ISMAR), and Multimedia (e.g., MMSys, ISM, BigMM, ICME). He also serves as a program committee member and reviewer for top-tier international journals and conferences in IEEE, ACM, and Springer.
A Dimensional Model of Interaction Style Variation in Spoken Dialog
Dr. Ward, Professor, UTEP CS Department
Time: Friday, November 19th 2:30-3:30pm
Location: CCSB 1.0202
In spoken dialog, people vary their interaction styles, and dialog systems should be able to do the same. Previous work has elucidated many aspects of style variation and adaptation, but a general model has been lacking. We applied Principal Component Analysis over 84 novel features that encode the frequencies of diverse interaction-related prosodic behaviors. The top 8 resulting dimensions together explained most of the variance, and each was interpretable. Some dimensions represented well-known aspects of style (degree of engagement, talkativeness, factuality, etc.) and others novel ones (positive vs. negative, change-oriented vs. accepting, etc.). Further, and surprisingly, we found that individuals generally exhibited a wide variety of styles: using individual style tendencies to predict behavior outperformed a speaker-independent model by only 3.6%.
This talk reports work presented at SigDial 2021, plus new results from work with Jonathan E. Avila.
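The analysis pipeline described above can be sketched in a few lines: center a matrix of behavior-frequency features, take a singular value decomposition, and read off the variance explained by the leading dimensions. The data below is random and purely hypothetical, standing in for the 84 prosodic features of the study.

```python
import numpy as np

rng = np.random.default_rng(0)
# rows = conversation sides, columns = 84 behavior-frequency features
# (synthetic placeholder data, not the SigDial 2021 corpus)
X = rng.normal(size=(200, 84))

# center, then PCA via SVD
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)   # fraction of variance per component

print(f"variance explained by top 8 dimensions: {explained[:8].sum():.1%}")
# each row of Vt[:8] is one style dimension: a weighting over the features
```

On real data, interpreting a dimension means inspecting which features receive large positive or negative weights in the corresponding row of Vt.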
Predictive Intelligence to Enable Continuous Software Re-engineering and Preventive Maintenance
Dr. Badreddin, Associate Professor, UTEP CS Department
When: Friday, November 12th, 2:30-3:30pm
Where: CCSB 1.0202
By predicting how software codebases evolve over time, software engineers can address maintenance and reliability issues before they manifest. The talk presents a recent collaborative NSF proposal along with early results. The talk covers broad topics, including data mining, machine learning, predictive analysis, and software engineering.
A key challenge in designing long-living software systems is the unpredictable nature of software evolution. A sound design deemed effective today may result in a rapidly deteriorating, unsustainable codebase over time. Reengineering activities that aim to address existing deficiencies may contribute to quality and reliability degradation in the long term. This challenge is particularly evident in experimental and research software systems. Scientists developing research software often follow domain-specific and knowledge-driven discovery processes. Research software is also subject to unique budgetary constraints and schedules. This, among other factors, further increases the unpredictability of software evolution and can complicate its maintenance and sustainability.
If it is possible to predict quality and reliability issues early, then engineers can design software that avoids or minimizes the impact of these yet-to-come reliability and quality issues. Engineers can continuously re-design software as evolutionary scenarios become more probable. Moreover, it becomes possible to proactively address a reliability or maintenance issue even before it materializes.
Early results show that basic machine learning algorithms are highly effective in predicting key code quality and reliability metrics. The talk will highlight these results, and will present a recent NSF proposal submitted in collaboration with Dr. Hossain.
Fourier Transform and Other Quadratic Problems under Interval Uncertainty
Dr. Kreinovich, Associate Professor, UTEP CS Department
When: Friday, November 5, 2021, 11:00 AM
Where: CCSB G.0208
Computers are used to estimate the current values of physical quantities and to predict their future values – e.g., to predict tomorrow’s temperature. The inputs x1, …, xn for such data processing come from measurements (or from expert estimates). Both measurements and expert estimates are not absolutely accurate: measurement results Xi are, in general, somewhat different from the actual (unknown) values xi of the corresponding quantities. Because of these differences Xi − xi (called “measurement errors”), the result Y = f(X1,…,Xn) of data processing is also somewhat different from the actual value of the desired quantity y – at least from the value y = f(x1,…,xn) that we would have obtained if we knew the exact values xi of the inputs.
In many practical situations, the only information that we have about measurement uncertainty is the upper bound Di on the absolute value of each measurement error. In such situations, if the measurement result is Xi, then all we know about the actual value xi of the corresponding quantity is that this value is in the interval [Xi – Di, Xi + Di]. Under such interval uncertainty, it is desirable to know the range of possible values of y.
In general, computing such a range is NP-hard already for quadratic functions f(x1,…,xn). Recently, a feasible algorithm was proposed for a practically important quadratic problem – of estimating the absolute value (modulus) of Fourier coefficients. In this talk, we show that this feasible algorithm can be extended to a reasonable general class of quadratic problems.
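A one-variable special case shows why quadratic terms are already subtle under interval uncertainty: the range of x**2 over [X − D, X + D] is not obtained by evaluating at the endpoints when 0 lies inside the interval. This small illustration is not the feasible algorithm from the talk, only a reminder of the underlying difficulty.

```python
def square_range(lo, hi):
    """Exact range of x**2 for x in the interval [lo, hi]."""
    endpoints = [lo * lo, hi * hi]
    if lo <= 0.0 <= hi:            # interior minimum at x = 0
        return 0.0, max(endpoints)
    return min(endpoints), max(endpoints)

# endpoint evaluation alone would wrongly suggest the range (4.0, 1.0) here:
print(square_range(-2.0, 1.0))     # exact range is (0.0, 4.0)
```

In n variables, each quadratic term may hide such an interior extremum, and the interactions between terms are what make the general range-estimation problem NP-hard.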
Dr. Kreinovich's main area of expertise is dealing with uncertainty and imprecision. There are two main aspects to uncertainty and imprecision. The first aspect is that data comes from measurements, and measurements are never absolutely accurate. It is important to analyze how this uncertainty affects our predictions, and how to make decisions under such uncertainty. Dr. Kreinovich has collaborated with specialists in radio astronomy, geoscience, environmental science, biology, and other areas.
Assessment of Impacts of Ambient Intonation Boosting on High Functioning Individuals with Prosody and Behaviors Stereotypical of Autism
Dr. Freudenthal, Associate Professor, UTEP CS Department
When: Friday, October 8, 2021, 11:00 AM
Where: CCSB G.0208
Ambient Vocal Intonation Boosting is a recently developed extension of Vocal Intonation Boosting that modifies the acoustic resonance of a room to boost people’s awareness of the intonation in their own and others’ voices. Vocal intonation boosting (VIB) was initially developed to help people sing on pitch. It has also been observed to eliminate or reduce the intensity of several socially problematic speech patterns stereotypical of autism spectrum disorders in high-functioning individuals. This effect, which is consistent with current understandings of neuropsychology, has not yet been rigorously examined.
This presentation will begin with a description of intonation boosting, our anecdotal observations of its effects, and a short summary of relevant research in neuroscience. The remainder of the talk will describe experiments being planned to more rigorously characterize the behavioral effects of VIB on neurotypical populations with idiosyncratic behaviors stereotypical of autism. This work is being planned in collaboration with faculty in educational psychology, rehabilitation counseling, and psychology. My hope is that this presentation will elicit feedback that will be useful in refining these experimental plans.
Personal Data Practices and Problems
Marissa Stephens, Googler in Residence
When: Friday, September 24th, 2021, 11:00 AM
Where: CCSB G.0208
This talk will discuss the difference between using data for individual personalization versus aggregated metrics, why user data is so important to keep safe, and cases to consider when designing your systems that process user data.
Marissa Stephens is a Senior Software Engineer at Google, where she has worked for the past 5 years since graduating from MIT with a degree in Computer Science and Electrical Engineering. Her work on the Discover Recording Team focuses on collecting, processing, and storing user data in a system that is flexible to accommodate new laws and regulations, scalable to billions of users, and adaptable to future product interactions, all while maintaining user trust and safety.
Using neural networks to make practical local schemes
Dan DeBlasio, UTEP CS Department
When: Friday, September 10th, 2021, 11:00 AM
Where: CCSB G.0208
Sequence fingerprinting has long been used to compare large numbers of text-based objects, be they web pages, documents, or genomes. As the sets of objects we search over continue to grow, there is urgency in the need for improved efficiency of these fingerprinting methods. Local schemes (sometimes called local algorithms) are a category of methods that choose a fingerprint, a substring of length k (a k-mer), from a window of the original sequence, a contiguous run of w k-mers. The term local is used because they choose the representative for a window without any external information about the rest of the sequence. This provides several useful properties, the most consequential of which are: any two strings with a long enough exactly matching substring (one that covers a whole window) will share a fingerprint, the one for the shared window(s); and, when defined as in Schleimer et al., the fingerprints selected are guaranteed to be not too far apart (the winnowing guarantee). We plan to make some of the first systematic efforts to uncover how to find, store, and use practical and general local schemes by leveraging recent advances in active and reinforcement learning.
This work is in its infancy, and the purpose of this presentation is to inform the department of the planned activities and to solicit feedback on techniques that can be helpful and/or identify pitfalls before they occur. The work is a continuation of work originally performed with co-authors Guillaume Marçais and Carl Kingsford at CMU.
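The classic local scheme of Schleimer et al. mentioned above can be sketched compactly: from each window of w consecutive k-mers, pick the (leftmost) lexicographically smallest one as the fingerprint. This minimal sketch is for illustration; practical schemes replace lexicographic order with learned or randomized orderings.

```python
def minimizers(seq, k, w):
    """Return the set of (position, k-mer) fingerprints, one pick per
    window of w consecutive k-mers (duplicates collapse in the set)."""
    kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    picks = set()
    for start in range(len(kmers) - w + 1):
        window = kmers[start:start + w]
        j = min(range(w), key=lambda i: window[i])  # leftmost smallest k-mer
        picks.add((start + j, window[j]))
    return picks

fp = minimizers("ACGTACGTGG", k=3, w=4)
positions = sorted(p for p, _ in fp)
# winnowing guarantee: consecutive selected positions are at most w apart
```

Because the pick depends only on the window's contents, two sequences sharing a full window necessarily share that window's fingerprint, which is the property that makes fingerprint-based search sound.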
Dan DeBlasio is currently an Assistant Professor in the Computer Science Department at the University of Texas at El Paso (UTEP). He was previously a Lane Fellow in the Computational Biology Department in the School of Computer Science at Carnegie Mellon University, where he worked in Carl Kingsford’s group. He received his PhD in Computer Science from the University of Arizona in 2016 under John Kececioglu. He holds an MS and BS in Computer Science from the University of Central Florida, working with Shaojie Zhang. He recently published a book on his work titled “Parameter Advising for Multiple Sequence Alignment”. Dan also recently finished a two-year appointment on the Board of Directors of the International Society for Computational Biology and is an advisor to the ISCB Student Council, where he has held several roles.
Aspects of Identity for human-machine dialogue
David Traum, University of Southern California
When: Friday, August 20th, 2021, 1:30 PM
Where: CCSB G.0208
Identity is what distinguishes each of us as individuals, and also contains aspects that we have in common with others. Aspects of identity are often expressed in conversation, and sometimes are even more prominent in important parts of the dialogue than the task under discussion or specific task roles that conversation participants engage in. Our hypothesis is that identity is important also for human-machine dialogue, particularly where the dialogue agent is engaging in human-like activities or portraying a human. In this talk we will present preliminary work to express and model aspects of identity, such as personal attributes, backstory, interaction style preferences, and storytelling, such that agents (including embodied agents) can express their identity and elicit and react to human expressions of identity in a number of human-machine dialogue domains, including talking to recorded video of real people, disaster-relief training, internet of things instructions, and others.
David Traum is the Director for Natural Language Research at the Institute for Creative Technologies (ICT) and Research Professor in the Department of Computer Science at the University of Southern California (USC). He leads the Natural Language Dialogue Group at ICT. Traum’s research focuses on Dialogue Communication between Human and Artificial Agents. He has engaged in theoretical, implementational and empirical approaches to the problem, studying human-human natural language and multi-modal dialogue, as well as building a number of dialogue systems to communicate with human users. Traum has authored over 250 refereed technical articles, is a founding editor of the Journal Dialogue and Discourse, has chaired and served on many conference program committees, and is a past President of SIGDIAL, the international special interest group in discourse and dialogue. Traum earned his Ph.D. in Computer Science at the University of Rochester in 1994.
How Artificial Intelligence and Connected Technologies Make Mobility Efficient
Dr. Mina Sartipi, University of Tennessee
Host: Dr. Shirley Moore, UTEP CS Department
When: Friday, April 23rd, 2021, 9:00 AM
Where: Zoom meeting link will be sent via e-mail to the csstudents list.
Urbanization creates significant pressure on many vital city mobility systems. While traditional modes of transportation are critical, smart cities are shifting toward deploying more sustainable, efficient, and accessible mobility solutions that improve the quality of life of citizens. Examples include shared mobility, autonomy, and connectivity that increase accessibility, reduce the chance of accident, improve incident response, reduce fuel consumption, and improve air quality. In this talk, we investigate how data, connectivity, and machine learning technologies can improve the traffic flow safely, while minimizing environmental impact.
Dr. Mina Sartipi is the Founding Director of the Center for Urban Informatics and Progress (CUIP) at the University of Tennessee at Chattanooga (UTC), where she is also a Guerry Professor in the Computer Science and Engineering Department. Her research, funded by NSF, NIH, DOE, the State of Tennessee, the Lyndhurst Foundation, and industry organizations, focuses on data-driven approaches to tackle real-world challenges in smart city applications focused on mobility, energy, and health. At CUIP, she coordinates cross-disciplinary research and strategic visions for urbanism and smart cities advancement with a focus on people and quality of life. She received her BS in Electrical Engineering from Sharif University of Technology, Tehran, Iran, in 2001 and her MS and PhD degrees in Electrical and Computer Engineering from Georgia Tech in 2003 and 2006, respectively.
Dr. Sartipi was named a 2019 Chattanooga Influencer by the Edge, Chattanooga’s business magazine, for her role in smart city research and collaboration with city, county, and industry partners. She is the recipient of several awards, including the 2016 UTC Outstanding Faculty Research and Creative Achievement award, a UC Foundation Professorship, and a 2020 Smart 50 award in Digital Transformation at Smart Cities Connect (in collaboration with the City of Chattanooga and EPB). She has delivered several keynotes and presentations including presentations to the US Congressional Caucus on Smart Cities, the Smart Cities Connect conference, and the National Transportation Training Directors. Dr. Sartipi has been an IEEE senior member since 2016. She serves on the board of directors for startups and non-profit organizations. In her spare time, she enjoys traveling and outdoor activities including hiking, rock climbing, and skiing with her husband and two young daughters.
Explainable AI Based on the Equivalence Between Takagi-Sugeno Fuzzy Systems and Neural Networks With ReLU Activation
Dr. Barnabas Bede, DigiPen Institute of Technology
When: Friday, February 5th, 2021, 3:00 PM
Where: Zoom meeting link will be provided through institute mail.
The recent successes of machine learning and artificial intelligence are widely due to the usage of neural networks and deep learning. The interpretability of neural networks, however, has not been studied as extensively as their applications. Fuzzy systems are based on easily interpretable linguistic rules, but they have been less extensively used in applications compared to neural networks. In the present talk we show that Takagi-Sugeno fuzzy systems with triangular membership functions are, under certain conditions, equivalent to neural networks with ReLU activation. Based on this equivalence, we propose a new neural network architecture based on a Takagi-Sugeno fuzzy system with triangular membership functions. The proposed system is capable of deep learning using the backpropagation algorithm. The interpretability of the system is discussed, together with its compatibility with other neural network architectures.
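A small hand-worked instance gives the flavor of the equivalence: a symmetric triangular membership function can be written using only ReLU units, via the identity |u| = relu(u) + relu(-u) plus one more ReLU for the clamp at zero. This is an illustration of the general idea, not the construction from the talk.

```python
def relu(x):
    return max(0.0, x)

def triangular(x, center, half_width):
    """Classic fuzzy triangular membership: 1 at `center`, falling
    linearly to 0 at center +- half_width, and 0 beyond."""
    return max(0.0, 1.0 - abs(x - center) / half_width)

def triangular_via_relu(x, center, half_width):
    """The same function computed purely with ReLU units, i.e. by a
    small two-layer ReLU network: |u| = relu(u) + relu(-u)."""
    absdiff = relu(x - center) + relu(center - x)
    return relu(1.0 - absdiff / half_width)
```

Since every membership function of the fuzzy system is ReLU-expressible, the rule firing strengths of a Takagi-Sugeno system become hidden-layer activations, which is the direction of the equivalence that lets such systems be trained with backpropagation.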
Dr. Barnabas Bede earned his Ph.D. in Mathematics from Babes-Bolyai University of Cluj-Napoca, Romania. His research interests include Fuzzy Sets, Fuzzy Logic and Image Processing. He serves as editorial board member of the journals Fuzzy Sets and Systems and Information Sciences. He joined DigiPen in 2011. Before that, he held positions in Romania, Hungary, University of Texas at El Paso and University of Texas-Pan American. At DigiPen he develops class material for Fuzzy Sets and various topics in Mathematical Analysis.
Differentially Private Machine Learning for Intelligent Systems
Xinyue Zhang, University of Houston
When: Wednesday, February 3, 2021, 2:15PM
Where: Zoom meeting link may be obtained by contacting the CS Front Office or any faculty member.
Xinyue Zhang is currently pursuing a Ph.D. degree in the Department of Electrical and Computer Engineering, University of Houston. She received the B.E. degree in communication engineering from Beijing Jiaotong University, China, in 2016, and the B.Sc. degree in electronic engineering from KU Leuven, Belgium, in 2016. She has been a Research Assistant with the Cognitive Radio Networking, Cybersecurity, and Cyber-Physical System Laboratory since 2016. Her research interests include security and privacy, machine learning, and cyber-physical systems.
Marketing Analytics: Problem Spaces and Potential Solutions
Chao Cai, Google Ads
When: Friday, October 30th, 2020, 11:00 AM - 12:00 PM
Where: register through this RSVP form to receive the Google Meets link to use.
Businesses large and small face a common challenge around attracting new customers and retaining existing ones, with marketing as a core component in tackling this challenge. As customers and businesses move online, the amount of data available to inform and improve marketing decisions has grown significantly. In this talk we'll look at a high level overview of some of the technical challenges involved in making use of this growing set of data to improve marketing decisions and optimize toward business goals, as well as a sample of solutions explored.
Chao Cai, Engineering Director, SMB Ads
Chao Cai is the engineering lead for Google's product efforts helping small and medium businesses (SMBs) grow by providing them with simple and effective advertising solutions. Previously, Chao led Google's conversion measurement, reporting, and attribution efforts across a number of advertiser products. In that role, he led development of products and features within Google Ads, Google Analytics, DoubleClick, and Google Tag Manager. Prior to this, Chao focused on various display advertising efforts within AdSense and YouTube. He holds numerous patents across the areas of online advertising, web analytics, and conversion analysis.
From Quantum Computing to Computers of Generation Omega (an overview of the Fall 2020 class CS5354/CS4365)
Professor Vladik Kreinovich, UTEP CS
When: Friday, March 6th, 2020, 11:00 AM - 12:00 PM
Where: CCSB 1.0204
While modern computers are much faster than in the past, there are still many practical problems for which they are too slow. Since we have been unable to achieve a drastic speedup by using the traditionally used physical processes, a natural idea is to analyze whether using other physical processes can help. This analysis is the main topic of this class.
A natural idea is to find processes whose future behavior is computationally complex to predict. So, we will start by recalling the main definitions of computational complexity, such as worst-case time complexity, average time complexity, feasible algorithms, P and NP, and NP-hard problems.
Then we will analyze different physical phenomena from the viewpoint of their computational complexity. We start with probably the most realistic option -- quantum computing, and then move to the use of randomness in general, to the use of physicists' belief that every physical theory will eventually need to be modified, and to the use of physical processes with non-classical space-time models such as special and general relativity and the possibility of discrete space-time.
Multi-Modal User Interaction: Gesture + Speech using Augmented Reality Headsets
Francisco R. Ortega, Colorado State University
When: Friday, February 28th, 2020, 1:30 PM - 2:30 PM
Where: CCSB 1.0202
Multi-modal interaction, in particular gesture and speech, is essential for augmented reality headsets as this technology becomes the future of interactive computing. It is possible that in the near future, augmented reality glasses will become pervasive and the preferred device. This talk will concentrate on the motivation behind gesture and speech user interaction, a recent study, and future work. The first part of the talk will describe a study where we demonstrated early and essential findings on gesture and speech user interaction. Findings include the types of gestures performed, the timing between gesturing and speech when used multi-modally (130 milliseconds), workload (using NASA TLX), and a series of design guidelines resulting from the study. I will also describe the future direction of this research and collaborative multi-modal gesture interaction.
Dr. Francisco R. Ortega is an Assistant Professor at Colorado State University and Director of the natural user interaction lab (NUILAB). Dr. Ortega earned his Ph.D. in Computer Science (CS) in the field of Human-Computer Interaction (HCI) and 3D User Interfaces (3DUI) from Florida International University (FIU). He also held positions as a Post-Doc and Visiting Assistant Professor at FIU between February 2015 and July 2018. Broadly speaking, his research has focused on gesture interaction, which includes gesture recognition and elicitation. His main research area focuses on improving user interaction by (a) eliciting (hand and full-body) gesture sets by user elicitation, and (b) developing interactive gesture-recognition algorithms. His secondary research aims to discover how to increase interest for CS in non-CS entry-level college students via virtual and augmented reality games. His research has resulted in multiple peer-reviewed publications in venues such as ACM ISS, ACM SUI, and IEEE 3DUI, among others. He is the first author of the book Interaction Design for 3D User Interfaces: The World of Modern Input Devices for Research, Applications, and Game Development by CRC Press. Dr. Ortega serves as Vertically Integrated Projects coordinator, promoting applied research for undergraduate students across disciplines.
Differentially Private Computation for Cyber Physical Systems
Sai Mounika Errapotu, UTEP ECE
When: Friday, February 21st, 2020, 11:00 AM - 12:00 PM
Where: BUSN 318
Cyber-Physical Systems (CPS) have infiltrated many areas such as aerospace, automobiles, chemical processing, civil infrastructure, energy, healthcare, transportation, entertainment, and consumer appliances due to their tight integration of computation and networking capabilities to monitor and control the underlying systems. Many domains of CPS, such as smart metering, sensor/data aggregation, crowd sensing, and traffic control, typically collect huge amounts of individual information for data analysis and decision making; therefore, privacy is a serious concern in CPS. Most traditional approaches protect the privacy of individuals' data by employing trusted third parties or entities for data collection and computation. An important challenge in these large-scale distributed applications is how to protect the privacy of the participants during computation and decision making, especially when such third-party entities are untrusted. This talk focuses on differential privacy based secure computation that guarantees individual privacy in the presence of untrusted computing entities. Since confidential information must not be inappropriately released, and the use of untrusted information must not corrupt trusted computation and its utility, this talk discusses the privacy-accuracy tradeoffs of differentially private computation in some state-of-the-art applications by considering application-specific information security requirements.
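The basic building block behind differentially private aggregation of the kind discussed above is the Laplace mechanism: release a sum over individuals' data with noise calibrated to the query's sensitivity and the privacy budget epsilon. The sketch below is a minimal illustration (the data, bounds, and parameters are hypothetical, and this is not the talk's protocol).

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) via the inverse-CDF method."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_sum(values, lower, upper, epsilon, seed=0):
    """Epsilon-DP sum of values known to lie in [lower, upper].
    The sensitivity of a bounded sum is (upper - lower): one person's
    record can move the result by at most that much."""
    rng = random.Random(seed)
    clipped = [min(max(v, lower), upper) for v in values]
    scale = (upper - lower) / epsilon   # noise scale = sensitivity / epsilon
    return sum(clipped) + laplace_noise(scale, rng)

# hypothetical smart-meter readings, each known to lie in [0, 10]
noisy_total = private_sum([3.2, 4.1, 2.8, 5.0, 3.7], 0.0, 10.0, epsilon=1.0)
```

Smaller epsilon means stronger privacy but larger noise, which is exactly the privacy-accuracy tradeoff the talk examines; removing the trusted aggregator then requires combining such noise with cryptographic techniques.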
Software Reliability Engineering: Algorithms and Tools
Vidhyashree Nagaraju, University of Massachusetts Dartmouth (Faculty candidate)
When: Friday, January 31st, 2020, 9:00 AM - 10:00 AM
Where: CCSB 1.0202
While there are many software reliability models, there are relatively few tools to automatically apply these models. Moreover, these tools are over two decades old and are difficult or impossible to configure on modern operating systems, even with a virtual machine. To overcome this technology gap, we are developing an open source software reliability tool for the software and system engineering community. A key challenge posed by such a project is the stability of the underlying model fitting algorithms, which must ensure that the parameter estimates of a model are indeed those that best characterize the data. If such model fitting is not achieved, users who lack knowledge of the underlying mathematics may inadvertently use inaccurate predictions. This is potentially dangerous if the model underestimates important measures such as the number of faults remaining or overestimates the mean time to failure (MTTF). To improve the robustness of the model fitting process, we have developed expectation conditional maximization (ECM) algorithms to compute the maximum likelihood estimates of nonhomogeneous Poisson process (NHPP) software reliability models. This talk will present an implicit ECM algorithm, which eliminates computationally intensive integration from the update rules of the ECM algorithm, thereby achieving a speedup of between 200 and 400 times that of explicit ECM algorithms. The enhanced performance and stability of these algorithms will ultimately benefit the software and system engineering communities that use the open source software reliability tool.
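To make the model-fitting problem concrete, here is a minimal sketch of maximum-likelihood estimation for one classic NHPP software reliability model, Goel-Okumoto. This is not the speaker's ECM algorithm; it uses a simple profile-likelihood grid search, and the failure-time data are invented for illustration:

```python
import numpy as np

def fit_goel_okumoto(times, T, b_grid=None):
    """Maximum-likelihood fit of the Goel-Okumoto NHPP, with intensity
    lambda(t) = a*b*exp(-b*t), to failure times observed on [0, T].

    Given b, the MLE of a has the closed form n / (1 - exp(-b*T)), so we
    profile the log-likelihood over a grid of b values.
    """
    if b_grid is None:
        b_grid = np.linspace(1e-4, 1.0, 10000)
    times = np.asarray(times, dtype=float)
    n = len(times)
    a_grid = n / (1.0 - np.exp(-b_grid * T))  # closed-form MLE of a given b
    loglik = (n * np.log(a_grid * b_grid) - b_grid * times.sum()
              - a_grid * (1.0 - np.exp(-b_grid * T)))
    i = np.argmax(loglik)
    return a_grid[i], b_grid[i], loglik[i]

# Hypothetical failure times (hours): gaps widen as faults are removed
times = [10., 25., 45., 70., 100., 140., 190., 260., 350., 480.]
a, b, ll = fit_goel_okumoto(times, T=500.0)
print(f"estimated total faults a = {a:.1f}, detection rate b = {b:.4f}")
```

The stability concern raised in the abstract shows up exactly here: a naive optimizer can return a local optimum of this likelihood, and a model fit at the wrong (a, b) silently misestimates the number of remaining faults.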
Vidhyashree Nagaraju is a PhD candidate in the Department of Electrical and Computer Engineering at the University of Massachusetts Dartmouth (UMassD), where she received her MS (2015) in Computer Engineering. She received her BE (2011) in Electronics and Communication Engineering from Visvesvaraya Technological University in India.
Differential Debugging for Side Channel Vulnerabilities
Saeid Tizpaz Niari, University of Colorado (Faculty Candidate)
When: Monday, January 27th, 2020, 4:00 PM - 5:00 PM
Where: Prospect Hall, room 324
In early 2018, the Meltdown and Spectre attacks challenged the security of computing devices globally. These attacks exploit timing information to compromise users' confidential information. While most existing debugging techniques provide support for functional correctness, support for non-functional properties, such as information leaks via timing observations, is scarce. In this talk, Tizpaz-Niari will showcase a range of tools and techniques to detect, explain, and mitigate side-channel vulnerabilities in large-scale libraries and web applications. The technique combines tools from gray-box fuzzing, dynamic program analysis, and machine learning inference. The talk also presents a novel technique that adapts neural network models to quantify the amount of information leaked.
Saeid Tizpaz-Niari is currently a PhD Candidate in the ECEE department at the University of Colorado Boulder. His research interests are at the intersection of Software Security, Machine Learning, and Verification. He is the first author of multiple publications in top tier AI, Security, and Verification conferences. In 2018, he received the Gold Research Award from the ECEE department at CU Boulder. In addition, he won second prize for his submission to the First Microsoft Open Source Challenge.
Learning Interpretable Features by Tensor Decomposition
Shah Muhammad Hamdi, Georgia State University (faculty candidate)
When: Wednesday, January 22nd, 2020, 4:00 PM - 5:00 PM
Where: Classroom Building, room C205
Representation learning of the nodes in a graph has facilitated many downstream machine learning applications such as classification, clustering, and visualization. Existing algorithms generate a less interpretable feature space for the nodes, where the roles of the features are not understandable. This talk covers the use of multi-dimensional arrays, or tensors, in node embedding. I will explain how tensor decomposition-based node embedding algorithms consider local and global structural similarities of the nodes, learn the proximity itself, require fewer tunable hyperparameters, and generate a feature space where the feature roles are understandable, while working on different types of static networks. In addition to social networks, I will show another application in the neuroscience domain, more specifically on brain networks derived from resting-state fMRI data of healthy and disabled subjects, where nodes represent brain regions and edges represent functional correlations among them. I will discuss the use of tensor decomposition in the representation learning of biomarkers of neurological diseases, which are the discriminative nodes and edges of brain networks that can distinguish the healthy population from the disabled population. I will demonstrate experimental findings on social networks and brain networks, and the potential of this approach for a research problem in solar physics: multivariate time series-based solar flare prediction.
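As a rough illustration of the underlying machinery (not the speaker's algorithms), the sketch below runs a rank-R CP decomposition, via alternating least squares, on a small node-proximity tensor. The toy graph and the stacking of k-step walk counts into tensor slices are assumptions made for the example:

```python
import numpy as np

def cp_als(X, rank, n_iter=200, seed=0):
    """Rank-R CP decomposition of a 3-way tensor by alternating least squares:
    X[i,j,k] is approximated by sum_r A[i,r] * B[j,r] * C[k,r]."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    for _ in range(n_iter):
        # each update is the exact least-squares solution given the other two factors
        A = np.einsum('ijk,jr,kr->ir', X, B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.einsum('ijk,ir,kr->jr', X, A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.einsum('ijk,ir,jr->kr', X, A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Toy "proximity tensor": slice k stacks the k-step walk counts of a 4-node graph
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
X = np.stack([np.linalg.matrix_power(adj, k) for k in (1, 2, 3)], axis=-1)
A, B, C = cp_als(X, rank=2)
# rows of A embed the nodes; rows of C show how each latent factor weights
# 1-, 2-, and 3-step proximity, which is what makes the feature roles inspectable
recon = np.einsum('ir,jr,kr->ijk', A, B, C)
print('relative reconstruction error:', np.linalg.norm(X - recon) / np.linalg.norm(X))
```

The interpretability claim in the abstract corresponds to the factor matrices: each latent feature of a node (a column of A) comes paired with an explicit profile over proximity orders (the matching column of C).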
Shah Muhammad Hamdi is a PhD candidate in the Department of Computer Science of Georgia State University. His research interests are machine learning, data mining, and deep learning, more specifically finding interesting patterns in real-life graph and time-series data. His research finds applications in the fields of social networks, neuroscience, and solar physics. He has publications in top data mining conferences such as IEEE ICDM, ACM CIKM, and IEEE Big Data. He has worked as a data scientist intern at Amazon Web Services (AWS) and LexisNexis Risk Solutions. Before starting his PhD, he worked as a Lecturer in Computer Science at Northern University Bangladesh, Dhaka, Bangladesh. He received his Bachelor's degree in Computer Science in 2014 from Rajshahi University of Engineering and Technology (RUET), Rajshahi, Bangladesh.
Research Advances and Opportunities in Scalable High Performance Computing Systems and Applications
Dr. Shirley Moore, ORNL
When: Friday, January 17th, 11:00-12:30
Where: CCSB 1.0702
High performance computing (HPC) systems are being transformed from relatively self-contained homogeneous multiprocessor systems to large-scale distributed and networked heterogeneous systems. To address the approaching end of Moore’s Law and Dennard scaling, HPC systems are increasingly incorporating specialized accelerators and new memory and communication technologies. The use of HPC systems is expanding beyond traditional scientific simulation applications to end-to-end coupled workflows that integrate machine learning and artificial intelligence. This talk will discuss recent work in evaluating future technologies, including processor-in-memory (PIM), field programmable gate arrays (FPGAs), and quantum computers. We will also discuss recent efforts and future opportunities in integrating edge computing and machine learning into HPC systems and applications.
Shirley Moore is a Senior Computer Scientist in the Future Technologies Group in the Computer Science and Mathematics Division at Oak Ridge National Laboratory (ORNL). Her research interests are in performance evaluation and modeling of emerging hardware and software technologies. She leads the ORNL efforts in several areas of the Department of Energy Exascale Computing Project, including Hardware Evaluation, Proxy Applications, and Application Assessment. She is a Co-PI or senior personnel on a number of research projects, including three quantum computing projects. She has also mentored several high school, undergraduate and graduate student interns while at ORNL.
Workflow Engines: Benefits and Challenges
Logan Chadderdon, Google and UTEP
When: 11 AM - 12 PM Friday, November 15th, 2019
Where: Business room 318
Workflow engines can help drive efficiency and correctness when dealing with complex interactions between systems and/or people. They are complex and multi-faceted, and often under-utilized, but they can serve as a critical piece of a system's architecture. This talk will cover what workflow engines are at a high level, what benefits they can bring in certain scenarios, and what challenges arise when designing, implementing, or using a workflow engine at scale.
Logan Chadderdon (email@example.com) is a software engineer at Google. He graduated from the University of Arizona, and then went to work on an internal workflow engine (frontend, backend, and related tooling) at Google for five years and counting in Mountain View, CA. He has a passion for building great products and learning/teaching technology. During the Fall 2019 semester he is teaching a CS1 course at UTEP as a Googler in Residence.
Leveraging HPC for Research and Education
Jerry Perez, Ph.D., UT Dallas
When: 2 PM - 3 PM Friday, November 8th, 2019
Where: CCSB 1.0202
High Performance Computing (HPC) enhances research and funding opportunities at universities that leverage it. HPC can create opportunities for learning and facilitate education, improving the state of the art in STEM classrooms across the university campus and beyond. In this talk, we will explore examples of HPC that may increase scholarship and funding at the university and answer the following questions: How can HPC be integrated with my class? What are some examples? How can HPC increase research funding? How can research funding increase HPC? Where can I find HPC computing resources beyond my campus?
Jerry Perez holds a Ph.D. in Information Systems from Nova Southeastern University and an M.B.A. from Wayland Baptist University. He is Director of Cyber-Infrastructure Operations and High Performance Computing at the Office of Information Technology at UT Dallas. He was previously an Adjunct Professor of Practice at Texas Tech University and Wayland Baptist University, a Senior Research Associate of High Performance Computing at Texas Tech University, and a Computer Architecture Consultant. His research interests include Information Systems design and deployment, supercomputing systems design and deployment, Quantum Information Systems, Information Technology management, Big Data analytics for security, and Internet of Things (IoT) security.
Contact: Amy Wagler (firstname.lastname@example.org) or Natalia Villanueva Rosales (email@example.com).
Evaluating Usability of Permissioned Blockchain for Internet-of-Battlefield Things
Abel Gomez, UTEP CS, PhD. Program
When: 11:00 AM - 12:00 PM Friday, October 18
Where: CCSB 1.0202
Military technology is ever-evolving to increase the safety and security of soldiers in the field while integrating Internet-of-Things solutions to improve operational efficiency in mission-oriented tasks on the battlefield. Centralized communication technology is the traditional network model used in battlefields; it is vulnerable to denial-of-service attacks and therefore suffers performance hazards. It also presents a central point of failure, so a flexible model that is mobile, resilient, and effective across different scenarios is needed. Blockchain is a customizable platform that allows multiple nodes to update a distributed ledger. The decentralized nature of the system suggests that it can be an effective tool for securing data communication among Internet-of-Battlefield Things (IoBT). In this work, we integrate a permissioned blockchain, namely Hyperledger Sawtooth, into the IoBT context and evaluate its performance with the goal of determining whether it has the potential to serve the performance needs of IoBT environments. Using different testing parameters, the resulting metric data help suggest the best parameter set, network configuration, and blockchain usability views in the IoBT context. We find that a blockchain-integrated IoBT platform depends heavily on the characteristics of the underlying network, such as topology, link bandwidth, and jitter, which can be tuned to achieve optimal performance.
Myths and Misconceptions about Using Social Media Data for Health Research
Dr. Graciela Gonzalez, University of Pennsylvania
When: 2:00 - 3:00 PM, Friday, October 11, 2019
Where: CCSB 1.0202
The total number of users of social media continues to grow worldwide, resulting in the generation of vast amounts of raw data direct from consumers. Popular social networking sites such as Facebook, Twitter and Instagram dominate this sphere. According to estimates, 500 million tweets and 4.3 billion Facebook messages are posted every day. A Pew Research Report on Social Media estimates that nearly half of adults worldwide and two-thirds of all American adults (65%) use social networking. The report states that of the total users, 26% have discussed health information, and, of those, 30% changed behavior based on this information and 42% discussed current medical conditions. Advances in automated data processing, machine learning and Natural Language Processing present the possibility of utilizing this massive data source for biomedical and public health applications, if researchers adequately address the methodological challenges unique to this media. Despite numerous published studies, however, myths and misconceptions persist about the suitability and adequate use of these data, impacting the perception of researchers, institutional review boards, and the general public on the validity of the studies for health research. In this talk, we will discuss (and hopefully debunk!) some of the more poignant myths and misconceptions, based on close to 10 years and 25 publications on the subject.*
Dr. Gonzalez Hernandez is a recognized expert and leader in natural language processing (NLP) applied to bioinformatics, medical/clinical informatics, and public-health informatics. She is an Associate Professor of Informatics in Biostatistics and Epidemiology at the University of Pennsylvania where she established the Health Language Processing Lab within the Institute of Biomedical Informatics. Her recent work focuses on NLP applications for public-health monitoring and surveillance and is funded by R01 grants from the National Library of Medicine and the National Institute of Allergy and Infectious Diseases. Her work on social media mining for pharmacovigilance has resulted in 25 publications in prestigious conferences and journals. Her work on enriching geospatial information for phylogeography uses NLP for the automatic extraction of relevant geospatial data from the literature and for linkage to GenBank records.
Contact: Natalia Villanueva Rosales (Computer Science), firstname.lastname@example.org
* This work was funded by the National Institutes of Health (NIH) National Library of Medicine (NLM) grant number R01LM011176. The content is solely the responsibility of the authors and does not necessarily represent the views of the NIH or NLM.
Prosody Research and Applications: The State of the Art
Nigel Ward, Ph.D.
When: 11:00 AM - 12:00 PM Friday, September 6
Where: CCSB 1.0202
Prosody comprises the musical aspects of speech: beyond the words said, the properties of pitch, loudness, timing, and so on. Prosody is essential in human interaction and relevant to every area of speech science and technology. This talk will be a survey of current developments for non-specialists. It will illustrate the issues and advances using recent findings about the prosodic constructions of English, describe ways to exploit prosody for applications including speech recognition, speech synthesis, dialog systems, and the inference of speaker states and traits, and finally discuss remaining challenges.
This talk will also be presented later next month as a Survey Presentation at Interspeech 2019 in Graz, Austria.
Persistent Threats, Active Defense: Cybersecurity Practices Today
Dr. Anthony Caldwell, Pramerica Ireland
When: Friday, May 10, 2019, 10:00 AM
Where: CCSB 1.0202
Ethical hacking, or more broadly information security, is a dynamic field in which the pace of change appears to leave cybersecurity professionals in a position where they are playing catch-up. Over the last ten years, persistent vulnerabilities and security-related issues have plagued many industries and have forced cultural changes oriented around cybersecurity within organizations across the world. Against this energetic working environment, this talk will illuminate some of the issues encountered in the field and how they have been dealt with from the practitioner's perspective. It will give an overview of issues associated with the detection and remediation of vulnerabilities such as cross-site scripting (XSS), business email compromise, and clickjacking. Also discussed are key security principles implemented in the industrial context, the broader area of threat perception, how end-user and technical training might be carried out, and how test methodologies must be adapted in order to provide the best service possible.
Dr. Anthony Caldwell is a cybersecurity engineer for the DevSecOps service at Pramerica Ireland, where he specializes in dynamic application security testing. He is a member of OWASP, a Certified Ethical Hacker (CEH), a Systems Security Certified Practitioner (SSCP), and a founding member of the cybersecurity services offered by Pramerica since their inception in 2010. Given the prevalence and frequency of attacks perpetrated by threat agents across the globe, Anthony helped to transform the way Prudential understands and deals with information security issues, creating many of the techniques and processes used within the organization today. Alongside security testing, he has given numerous in-house and external talks to a wide variety of audiences, ranging from academic to non-technical, and has published twelve professional articles in areas such as information security, ethical hacking, and digital forensics. Dr. Caldwell joined Pramerica in 2001 as a QA engineer working in mainframe technology, progressing to client-server work across numerous business units in Prudential. Prior to joining Prudential in 2001, Dr. Caldwell began his career at Intel as a device engineer testing the Pentium processor, then at AOL Time Warner as a beta tester for its broadband service.
Dr. Caldwell holds an MSc in atomic physics, is a member of the Institute of Physics, and has carried out PhD research in the fields of information systems research and science education. In information systems research, Dr. Caldwell focused on applying the Technology Acceptance Model to establish end users’ intentions towards the usage of an online learning platform. His work in science education applied and extended Third-Space Theory within the context of small independent companies that demonstrate scientific principles to schools, museums, and science festivals in the UK, Northern Ireland, and the Republic of Ireland. He is also a part-time lecturer and tutor with Dublin City University, Queen's University Belfast, and Letterkenny Institute of Technology, and is a volunteer with the Donegal Youth Service as a mathematics tutor for the underprivileged.
UTEP host: Somdev Chatterjee; Talk host: Dr. Badreddin
Vidi Opus: A Startup to Revolutionize Agriculture with Innovative Technologies
When: Friday, April 26, 2019, 10:00 AM
Where: Business 312
Mr. Moya will speak about some of the key challenges in tracing food and related products as they move through supply chains, starting with the farmer or rancher and ending with the consumer. He will also speak about the complexity of the food supply system and provide an overview of key challenges and their manifestation in food recalls. He will finish by talking about the Vidi Opus and CattleCast startups, which aim to address some of the aforementioned challenges.
Jonas Moya is an entrepreneur from Tucumcari, New Mexico and the founder of Vidi Opus. Jonas has held many roles in the industry, ranging from cattle rancher and dryland farmer to livestock deputy inspector and agricultural researcher. Through these different roles, Jonas identified many trends and challenges that are consistent across the multiple industries that make up agriculture, which led him to found Vidi Opus, a tech company looking to create disruptive technologies designed to meet the needs of agricultural industries.
The Future of Robotic Exploration of the Universe: A Systems Engineering Perspective
Dr. Maged Elaasar, JPL (NASA, Caltech)
When: Friday, March 29, 10:00 - 11:00 AM
Where: CCSB G.0208
The Jet Propulsion Laboratory's (JPL) mission is to explore the universe with the aid of advanced robotic systems. These missions necessitate advanced systems engineering methods to achieve their goals safely, reliably, efficiently, and systematically. At JPL, we operate at the cutting edge of systems engineering at all levels of the mission.
In this talk, Dr. Elaasar will articulate some desirable characteristics of a modern systems engineering practice. He will present architectural principles that, when adhered to, can enhance the prospects of achieving those characteristics. He will then present recent work that aims at realizing those architectural principles through a software system called Open CAESAR, which is being used by various space projects at JPL.
Dr. Maged Elaasar is a Senior Software Systems Architect at NASA’s Jet Propulsion Laboratory (JPL) at the California Institute of Technology (Caltech). He leads a JPL-wide strategic R&D program named Integrated
Time to Gather Stones
Dr. Vladik Kreinovich
When: Friday, March 15, 10:00 - 11:00 AM
Where: Business Building, Room 302
TIME TO GATHER STONES. Many heuristic methods have been developed in intelligent computing. Researchers have proposed many new exciting ideas. Some of them work well, some don’t. And promising techniques, the ones that work well, often benefit from trial-and-error tuning. It is great to know and use all these techniques, but it is also time to analyze why some techniques work well and some don’t. Following the Biblical analogy, we have gone through the time when we cast away stones in all directions, when we developed numerous seemingly unrelated ideas. It is now time to gather stones, to find the common patterns behind the successful ideas. Hopefully, in the future, this analysis will help to replace time-consuming trial-and-error optimization with more efficient techniques.
CASE STUDIES. In this talk, we will mainly concentrate on three classes of empirically successful semi-heuristic methods that do not yet have a full theoretical explanation:
* fuzzy techniques, techniques for translating expert knowledge described in terms of imprecise (“fuzzy”) natural-language words like “small” into precise numerical strategies;
* neural networks (in particular, deep neural networks), techniques for learning a dependence from examples; and
* quantum computing, techniques that use quantum effects to make computations faster and more reliable.
Toward fluent collaboration in human-robot teams
Dr. Tariq Iqbal, MIT
When: February 11, 4:30 - 5:30 PM
Where: Classroom Building, Room 305
Robots currently have the capacity to help people in several fields, including health care, assisted living, and manufacturing, where the robots must share physical space and actively interact with people in teams. The performance of these teams depends upon how fluently all team members can jointly perform their tasks. In order to successfully act within a group, a robot requires the ability to monitor other members' actions, model interaction dynamics, anticipate future actions, and adapt its own plans accordingly. To achieve that, I develop human-team inspired algorithms for robots to fluently coordinate and collaborate with people in complex, real-world environments by modeling how people interact among themselves in teams and by utilizing that knowledge to inform robots' actions.
In this talk, I will present algorithms to measure the degree of coordination in groups and approaches to extend these understandings by robots to enable fluent collaboration with people. I will first describe a non-linear method to measure group coordination, which takes multiple types of discrete, task-level events into consideration. Building on this method, I will present two anticipation algorithms to predict the timings of future actions in teams. Finally, I will describe a fast online activity segmentation algorithm which enables fluent human-robot collaboration.
Tariq Iqbal is a postdoctoral associate in the Interactive Robotics Group at MIT. He received his Ph.D. from the University of California San Diego, where he was a member of the Contextual Robotics Institute and the Healthcare Robotics Lab. His research focuses on developing algorithms for robots to solve problems in complex, real-world environments, which enable robots to perceive, anticipate, adapt, and fluently collaborate with people in teams.
Toward building an automated bioinformatician: Parameter advising for improved scientific discovery
Dr. Dan DeBlasio, Carnegie-Mellon University
When: Thursday, February 14, 4:30 - 5:30 PM
Where: Classroom Building, Room C305
Modern scientific software has a large number of tunable parameters that must be adjusted to ensure computational performance and the accuracy of results. When these parameter choices are made incorrectly, we may overlook significant results or falsely report insignificant ones. Optimizing the parameter choices for one input may not yield an assignment that is good for another, so this parameter optimization typically needs to be repeated for each new piece of data. Standard machine learning methods for solving this problem need to run the software repeatedly, which may not be practical. Because of the time required to optimize parameters, and the loss of accuracy that can result when they are chosen incorrectly, the default parameter vector provided by the tool developer is often used. These defaults are designed to work well on average, but most interesting cases are rarely “average”.
In this talk, I will describe my first steps in automatically learning the correct program configuration for biological applications using a framework we call “Parameter Advising”. To apply this framework to the problem of multiple sequence alignment, we developed an accuracy estimator, called Facet, to help choose alignments, since no ground truth is available in practice. When we use Facet for advising on the Opal aligner, we boost accuracy by 14.6% on the hardest-to-align benchmarks. For the reference-based transcript assembly problem, applying parameter advising to the Scallop assembler increases accuracy by 28.9%. The framework is general and can be extended to other problems in computational biology and beyond. I will discuss possible areas where parameter advising could be used to automatically learn to run complex analysis software.
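The advising loop itself is simple to sketch. The toy stand-ins below, a polynomial-fitting "tool" and a residual-plus-complexity "estimator", are invented for illustration and are not the actual Opal or Facet interfaces:

```python
import numpy as np

def advise(data, advisor_set, run_tool, estimate):
    """Parameter advising: run the tool under each candidate parameter
    vector, score each result with the estimator, keep the best."""
    results = {p: run_tool(data, p) for p in advisor_set}
    best = max(advisor_set, key=lambda p: estimate(results[p]))
    return best, results[best]

# Toy stand-ins: the "tool" fits a polynomial of the given degree, and the
# "estimator" rewards small residuals while penalizing model complexity,
# playing the role Facet plays when no ground truth is available.
def run_tool(data, degree):
    x, y = data
    residuals = np.polyval(np.polyfit(x, y, degree), x) - y
    return degree, residuals

def estimate(result):
    degree, residuals = result
    return -(np.sum(residuals ** 2) + 0.1 * degree)  # higher is better

x = np.linspace(0.0, 1.0, 20)
y = 2 * x ** 2 - x + 0.3  # truly quadratic data
best_degree, _ = advise((x, y), advisor_set=(1, 2, 3, 4),
                        run_tool=run_tool, estimate=estimate)
print('advised degree:', best_degree)  # the advisor should prefer degree 2
```

The key design point carries over from the talk: because the estimator scores outputs rather than inputs, the advisor set can be chosen per problem domain while the loop stays unchanged.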
Dan DeBlasio is currently a Lane Fellow in the Computational Biology Department in the School of Computer Science at Carnegie Mellon University, where he works in Carl Kingsford’s group. He received his PhD in Computer Science from the University of Arizona in 2016 under John Kececioglu, and holds an MS and BS in Computer Science from the University of Central Florida, where he worked with Shaojie Zhang. He recently published a book on his work titled “Parameter Advising for Multiple Sequence Alignment”. Dan also recently finished a two-year appointment to the Board of Directors of the International Society for Computational Biology and is an advisor to the ISCB Student Council, where he has held several roles.
Relativistic Effects Can Be Used to Achieve a Universal Square-Root (Or Even Faster) Computation Speedup
Dr. Vladik Kreinovich
When: Friday, February 1, 10:00 - 11:00 AM
In this talk, we show that special relativity phenomena can be used to reduce the computation time of any algorithm from T to the square root of T. For this purpose, we keep the computers where they are, but the whole civilization starts moving around the computer, at an increasing speed, reaching speeds close to the speed of light. A similar square-root speedup can be achieved if we place ourselves near a growing black hole. Combining the two schemes can lead to an even faster speedup: from time T to the fourth root of T.
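One way to see the claimed square-root scaling is through special-relativistic time dilation; the particular speed profile below is an illustrative choice, not necessarily the one from the talk:

```latex
% Proper time tau experienced by the moving civilization while the
% stationary computer runs for coordinate time T:
\tau = \int_0^T \sqrt{1 - v(t)^2/c^2}\, dt = \int_0^T \frac{dt}{\gamma(t)}
% Choose a speed profile whose Lorentz factor grows as
% gamma(t) = sqrt(t / t_0) for t >= t_0 (with v = 0 before t_0); then
\tau = t_0 + \int_{t_0}^{T} \sqrt{\frac{t_0}{t}}\, dt
     = 2\sqrt{t_0\, T} - t_0 = O(\sqrt{T})
```

So a computation that takes coordinate time T is experienced by the travelers as only on the order of the square root of T of subjective time.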
Artificial Intelligence Approaches for Wickedly Hard National Security Problems
Dr. Daniel Tauritz
When: Monday, February 4, 4:30 - 5:30 PM
Many national security problems are wickedly hard in that they map to computational problem classes which are intractable. This seminar aims to illuminate how artificial intelligence approaches can be created to address these problems and produce useful solutions. In particular, two promising approaches will be discussed, namely (I) computational game theory employing coevolutionary algorithms for identifying high-consequence adversarial strategies and corresponding defense strategies, and (II) hyper-heuristics employing evolutionary computation for the automated design of algorithms tailored for high-performance on targeted problem classes.
The first approach will be illustrated with the Coevolving Attacker and Defender Strategies for Large Infrastructure Networks (CEADS-LIN) project funded by Los Alamos National Laboratory (LANL) via the LANL/S&T Cyber Security Sciences Institute (CSSI) [ https://web.mst.edu/~tauritzd/CSSI/]. This project focuses on coevolving attacker and defender strategies for enterprise computer networks. A proof of concept for operationalizing cyber security R&D from this project demonstrated in simulation that coevolution can implement a computational game theory solution for adversarial models of network security. Currently, a high-fidelity emulation framework with intelligent attacker and defender agents is being developed, with the end goal of providing a fully automated solution for identifying high-impact attacks and corresponding defenses.
The second approach will be illustrated with the Scalable Automated Tailoring of SAT Solvers project funded by Sandia National Laboratories with supplemental funding from the Computer Research Association’s Committee on the Status of Women in Computing Research (CRA-W), and with the Network Algorithm Generating Application (NAGA) project funded via CSSI. These projects show how hyper-heuristics can be employed to create algorithms targeting arbitrary but specific problem classes for repeated problem solving where high a priori computation costs can be amortized over many problem class instances.
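The competitive-coevolution idea behind the first approach can be sketched on a toy attack/defense game. The game, budgets, and parameters below are invented for illustration and bear no relation to CEADS-LIN's actual models:

```python
import random

random.seed(1)  # reproducible toy run
N, POP, GENS, BUDGET = 12, 30, 60, 4  # nodes, population size, generations, moves per side

def rand_strat():
    return frozenset(random.sample(range(N), BUDGET))

def mutate(s):
    # swap one chosen node for one currently unchosen node
    add = random.choice(sorted(set(range(N)) - s))
    drop = random.choice(sorted(s))
    return (s - {drop}) | {add}

def attacker_payoff(attack, defense):
    return len(attack - defense)  # points for hits on unhardened nodes

def coevolve():
    attackers = [rand_strat() for _ in range(POP)]
    defenders = [rand_strat() for _ in range(POP)]
    for _ in range(GENS):
        # each side's fitness: total payoff against a sample of the other side
        sample_d = random.sample(defenders, 10)
        sample_a = random.sample(attackers, 10)
        fa = {a: sum(attacker_payoff(a, d) for d in sample_d) for a in attackers}
        fd = {d: -sum(attacker_payoff(a, d) for a in sample_a) for d in defenders}
        attackers = sorted(attackers, key=fa.get, reverse=True)[:POP // 2]
        defenders = sorted(defenders, key=fd.get, reverse=True)[:POP // 2]
        attackers += [mutate(s) for s in attackers]  # refill with mutated elites
        defenders += [mutate(s) for s in defenders]
    return attackers[0], defenders[0]

best_attack, best_defense = coevolve()
print('top attack targets:', sorted(best_attack))
print('top defense cover :', sorted(best_defense))
```

The arms-race dynamic is the point: each population's fitness is defined only relative to the other, so attackers and defenders drive one another toward the high-consequence strategies the seminar describes.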
Daniel Tauritz is an Associate Professor & Associate Chair in the Department of Computer Science at the Missouri University of Science and Technology (S&T), a University Contract Scientist for Sandia National Laboratories, a University Collaboration Scientist at Los Alamos National Laboratory (LANL), the founding director of S&T's Natural Computation Laboratory, and founding academic director of the LANL/S&T Cyber Security Sciences Institute. He received his Ph.D. in 2002 from Leiden University for Adaptive Information Filtering employing a novel type of evolutionary algorithm. His research interests focus on artificial intelligence approaches to complex real-world problem solving with an emphasis on national security problems in areas such as cyber security, cyber physical systems, critical infrastructure protection, and program understanding. He was granted a US patent for an artificially intelligent rule-based system to assist teams in becoming more effective by improving the communication process between team members.