
Lunchtime Seminar

Archive WiSe 2020/21 & SoSe 2021

 

Certified termination analysis

Lecturer: Maximilian Haslbeck, Researcher at CL - University of Innsbruck

Date: 24.06.2021

Abstract:
Every programmer has accidentally written non-terminating code and then spent a lot of time and effort figuring out where it starts to loop indefinitely. To help developers catch such bugs early, there exist numerous automatic analysis tools that can deduce whether a program terminates. But are these analysis tools themselves bug-free, and can one trust their output? To address this problem, we developed a framework for certified termination analysis, which I will present in this talk.
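
As a hedged illustration of the kind of bug such tools target (a hypothetical example, not taken from the talk), the following loop terminates for some inputs but not for others, because the loop variable can step over the exit condition:

    # Hypothetical example: intended to count down to zero, but for odd n
    # the value skips over 0 and the loop never ends.
    def countdown(n: int) -> None:
        while n != 0:
            n -= 2

    # countdown(4) terminates; countdown(3) loops forever.  A termination
    # analyser tries to prove or refute termination of such loops; a
    # certified analysis additionally checks the produced proof independently.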


Autonomous robot manipulation for planetary exploration

Lecturer: Renaud Detry, UCLouvain, Belgium

Date: 17.06.2021

Abstract:
I will discuss the experimental validation of autonomous robot behaviors that support the exploration of Mars' surface, lava tubes on Mars and the Moon, icy bodies and ocean worlds, and operations on orbit around the Earth. I will frame the presentation with the following questions: What new insights or limitations arise when applying algorithms to real-world data as opposed to benchmark datasets or simulations? How can we address the limitations of real-world environments—e.g., noisy or sparse data, non-i.i.d. sampling, etc.? What challenges exist at the frontiers of robotic exploration of unstructured and extreme environments? I will discuss our approach to validating autonomous machine-vision capabilities for the notional Mars Sample Return campaign, for autonomously navigating lava tubes, and for autonomously assembling modular structures on orbit. The talk will highlight the thought process that drove the decomposition of a validation need into a collection of tests conducted on off-the-shelf datasets, custom/application-specific datasets, and simulated or physical robot hardware, where each test addressed a different range of experimental parameters for sensing/actuation fidelity, breadth of environmental conditions, and breadth of jointly-tested robot functions.


CodeAbility Austria: Digitally Supported Programming Education at Austrian Universities

Lecturer: Michael Breu, Researcher at QE - University of Innsbruck

Date: 10.06.2021

Abstract:
Teaching programming at universities is currently undergoing a major transition from traditional “paper-based” problem statements and manual corrections to a supportive, integrated solution infrastructure. Such an infrastructure supports students in solving their assignments, provides automated feedback on exercise submissions, and relieves teachers so that they can focus on fine-grained feedback on the solutions. In the project “CodeAbility Austria”, seven Austrian universities collaborate to bring programming education to the next level. This comprises several aspects, e.g. the piloting of a common programming learning platform, the development of “standard” course templates for teaching various programming languages (Python, Java, …), and a sharing platform for teaching materials. The project is accompanied by empirical research and cooperation with partner projects within the digitalization initiative of the Federal Ministry of Education, Science and Research. In this talk we want to give an insight into the current state of CodeAbility and enable other teachers to appraise the results for their own teaching.


An elephant, a baby and a ballerina visit a computer vision lab....

Lecturer: John K. Tsotsos, York University, Toronto, Canada

Date: 27.05.2021

Abstract:
What would be the impact of a visit to a computer vision lab by an elephant, a baby and a ballerina?
No, this is not the opening line for a joke. I will describe these three visits, none of which were physical ones, but all of which represent real events that made large contributions to the lab's research program. The elephant provided a demonstration of one of the theoretical foundations of the Selective Tuning model of visual attention. The baby helped consolidate our position on machine learning. This position is novel and our work is only just beginning, but our approach is far better grounded in human learning than any other approach. The ballerina highlighted our lab's focus on active perception with a nice demonstration of how humans are active perceivers in a 3D world. This differs in very important ways from the passive, pixel-centric, 2D focus of the vast majority of current work. Active perception is complex and requires radical changes to empirical methodologies. As time permits, I will detail work that illustrates some of these concepts.


Forensicability of Deep Neural Network Inference Pipelines

Lecturer: Alex Schlögl, Researcher at SEC - University of Innsbruck

Date: 20.05.2021

Abstract:
We propose methods to infer properties of the execution environment of machine learning pipelines by tracing characteristic numerical deviations in observable outputs. Results from a series of proof-of-concept experiments obtained on local and cloud-hosted machines give rise to possible forensic applications, such as the identification of the hardware platform used to produce deep neural network predictions. Finally, we introduce boundary samples that amplify the numerical deviations in order to distinguish machines by their predicted label only.
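
As a hedged, simplified illustration (not the authors' method): the same floating-point computation can yield slightly different results depending on how the arithmetic is scheduled, and deviations of exactly this kind are what a forensic analysis of inference pipelines can exploit:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(100_000).astype(np.float32)

    # The same mathematical sum, evaluated in two different orders:
    sequential = np.float32(0)
    for v in x:
        sequential += v
    pairwise = x.sum()  # NumPy uses pairwise summation internally

    # The tiny discrepancy reflects the execution strategy, not the data;
    # analogous deviations across hardware platforms can act as a fingerprint.
    print(abs(float(sequential) - float(pairwise)))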


Open question answering in temporal news article collections

Lecturer: Adam Jatowt, Researcher at DiSC - University of Innsbruck

Date: 06.05.2021

Abstract:
The fields of automatic question answering and reading comprehension have recently been advancing quite rapidly. Open-domain question answering, in particular, aims at answering arbitrary user questions from large document collections such as Wikipedia. We can already observe open-domain question answering in practice when web search engines answer our questions directly instead of requiring us to read through the returned search results. This talk will be about our latest efforts in automatic question answering over temporal news collections, which typically contain millions of news articles published over time frames spanning several decades. To locate the correct answer in such collections, one first needs to find candidate documents that may contain the answer. We propose a re-ranking approach for news articles that utilizes temporal information embedded in the documents and in the collection, thus combining solutions from Temporal Information Retrieval and Natural Language Processing.
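
A minimal sketch of such temporal re-ranking (assuming a hypothetical base retrieval score per article and a target date inferred from the question; this is not the talk's actual model):

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Article:
        text: str
        published: date
        lexical_score: float  # e.g. from BM25, assumed to be given

    def temporal_score(article: Article, target: date, scale_days: float = 365.0) -> float:
        # Decays with the distance between the publication date and the date the question refers to.
        return 1.0 / (1.0 + abs((article.published - target).days) / scale_days)

    def rerank(candidates: list[Article], target: date, alpha: float = 0.7) -> list[Article]:
        # Convex combination of lexical relevance and temporal proximity.
        key = lambda a: alpha * a.lexical_score + (1 - alpha) * temporal_score(a, target)
        return sorted(candidates, key=key, reverse=True)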


ATLAS: Automated amortised complexity analysis of self-adjusting data structures

Lecturer: Georg Moser, Researcher at TCS - University of Innsbruck

Date: 29.04.2021

Abstract:
Arguing about the performance of self-adjusting data structures such as splay trees was a main objective when Sleator and Tarjan introduced the notion of *amortised* complexity. Analysing these data structures requires sophisticated potential functions, which typically contain logarithmic expressions. Possibly for these reasons, and despite the recent progress in automated resource analysis, they have so far eluded automation. In this talk, I'll report on the first fully automated amortised complexity analysis of self-adjusting data structures. This is joint work with Lorenz Leutgeb, David Obwaller and Florian Zuleger.
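
For orientation (standard textbook material, not the ATLAS formalism itself): the classical potential function for a splay tree T and the resulting amortised cost of the i-th operation are

    \Phi(T) = \sum_{x \in T} \log_2 |T_x|, \qquad a_i = t_i + \Phi(T_i) - \Phi(T_{i-1}),

where |T_x| is the number of nodes in the subtree rooted at x and t_i is the actual cost of the operation. Automating the analysis essentially means synthesising potential functions of this logarithmic shape.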


Conflicting Bundles: Adapting Architectures Towards the Improved Training of Deep Neural Networks

Lecturer: David Peer, Researcher at IIS - University of Innsbruck

Date: 22.04.2021

Abstract:
Designing neural network architectures is a challenging task, and knowing which specific layers of a model must be adapted to improve the performance is almost a mystery. In this talk, I will describe a novel method that we developed to identify layers that decrease the test accuracy of trained models. More precisely, we identified those layers that worsen the performance because they produce conflicting training bundles. I will show theoretically and empirically why the occurrence of conflicting bundles during training decreases the accuracy of neural networks. Based on these findings, I will also describe a novel neural-architecture-search algorithm that we introduced to remove performance-decreasing layers automatically, already at the beginning of training. Finally, I will show that it is possible with the same method to remove around 60% of the layers of an already trained residual neural network with no significant increase in the test error.
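
A hedged sketch of the intuition as stated in the abstract (hypothetical tolerance and data layout, not the paper's exact criterion): a conflict at a layer arises when samples with different labels are mapped to nearly indistinguishable activations, so later layers can no longer separate them:

    import numpy as np

    def conflicting_pairs(activations: np.ndarray, labels: np.ndarray, tol: float = 1e-3) -> int:
        """Count pairs of samples that a layer maps to (nearly) the same activation
        vector although their labels differ (hypothetical conflict criterion)."""
        conflicts = 0
        for i in range(len(labels)):
            for j in range(i + 1, len(labels)):
                close = np.linalg.norm(activations[i] - activations[j]) < tol
                if close and labels[i] != labels[j]:
                    conflicts += 1
        return conflicts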


3D reconstruction with focus on minimally-invasive surgery

Lecturer: Stefan Spiss, Researcher at IGS - University of Innsbruck

Date: 15.04.2021

Abstract:
In recent years, new endoscopic multi-camera systems that provide omnidirectional views were developed to improve navigation and orientation and to reduce the number of blind spots not visible with standard endoscopic cameras. Videos of interventions recorded with such camera systems can be used to create new kinds of interactive training media. Especially for more advanced interactions like free viewpoint selection, 3D data of the surgical scene is required. Therefore, 3D reconstruction with a 360° multi-camera system was developed in this thesis to lay the basis for such training media. Since multi-camera systems at endoscopic scale are not publicly available yet, a commercial 360° camera system was used instead. Depth data is calculated for neighboring pairs of cameras with overlapping fields of view using stereo vision. The resulting point clouds are merged into one large reconstruction of the whole environment. Moreover, tracking of objects with known geometry in the reconstructed environment was investigated in a second step. This was mainly motivated by the fact that 3D data of organs and anatomical parts is usually available from preoperative examinations. Here, the 3D object first has to be manually registered in the reconstructed point cloud. Then frame-wise tracking is performed by finding the updated object's position in each new point cloud using iterative closest point. In the test scene, a small 3D-printed object (∼ 80 mm) was successfully tracked along a circular motion (radius: ∼ 109 mm) with a mean error of about 2 mm, even at transitions between different stereo pairs.
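
A hedged sketch of a single iterative-closest-point step as used for frame-wise tracking (a textbook nearest-neighbour matching plus Kabsch alignment, not the system's actual implementation):

    import numpy as np
    from scipy.spatial import cKDTree

    def icp_step(model: np.ndarray, scene: np.ndarray):
        """One ICP iteration: match each model point (N x 3) to its nearest scene
        point, then compute the rigid transform (R, t) best aligning the matches."""
        _, idx = cKDTree(scene).query(model)      # nearest-neighbour correspondences
        matched = scene[idx]
        mu_m, mu_s = model.mean(axis=0), matched.mean(axis=0)
        H = (model - mu_m).T @ (matched - mu_s)   # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                  # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_s - R @ mu_m
        return R, t

    # Repeating this step until the transform converges yields the updated object pose.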


Machine learning in decoding and driving clinical cancer care

Lecturer: Dalibor Hrg, Researcher at DBIS - University of Innsbruck & CCB Medical University of Innsbruck

Date: 18.03.2021

Abstract:
Artificial intelligence (AI) and machine learning (ML) play a role in many deployed decision systems today. Explaining, in a human-understandable way, the relationship between the input and output of ML models is essential to the trustworthiness of such systems in high-impact areas such as finance or medicine. In radiology and oncology as branches of medicine and clinical practice, as well as in pharma and IT companies, AI/ML is increasingly used for predictive modeling with genomic and/or diverse imaging data (radiomics) in order to understand patient survival and cancer treatment responses, or to stratify patients for clinical benefit. Chemotherapy is a major treatment modality used by oncologists in clinics, next to surgery and radiotherapy, or in combination with immunotherapies. Nevertheless, across all cancer types, patient response rates to drugs are low and systematically not yet understood - an open problem in "Deep Medicine". Advances in interpretable machine learning have not yet been fully utilized, opening opportunities for engineering new curative drug combinations and driving curative cancer care.


The Matterhorn meets Big Data - 15 years of sensing in extreme environments with the PermaSense project

Lecturer: Jan Beutel, Researcher at NES - University of Innsbruck

Date: 11.03.2021

Abstract:
In this talk we will review more than a decade of experience with applications of low-power wireless sensor networks that enable novel geoscience and natural hazard mitigation approaches. We will take a journey to the environment most susceptible to climate change - the mountain cryosphere - to understand how technological advances of the 21st century created opportunities by enabling measurements that had previously been impossible to obtain. Better-quality data, obtained online and in near real-time, does not only constitute a virtue and enabler but equally calls for new method and tool development. In the latter part of this talk we look ahead, highlighting current challenges in sensor design and in the analysis of heterogeneous data sets using state-of-the-art data science tools.


ndzip: A high-throughput parallel lossless compressor for scientific data

Lecturer: Fabian Knorr, Researcher at DPS - University of Innsbruck

Date: 04.03.2021

Abstract:
Applications in High Performance Computing (HPC) commonly need to exchange large volumes of floating-point data. Fast, specialized compression schemes can speed up such workloads by reducing the time spent waiting for data transfers. We introduce ndzip, a high-throughput lossless compressor for multidimensional grids of single- and double-precision floating point data. With a parallelization scheme efficiently targeting modern SIMD-capable multicore processors, it achieves a compression throughput close to main memory bandwidth, significantly outperforming existing schemes. We evaluate the compressor using a representative set of scientific data, demonstrating a competitive trade-off between compression effectiveness and throughput.
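
As a hedged illustration of the general family of techniques such compressors build on (ndzip's actual scheme uses a different predictor and a block-based layout): predict each value from its neighbours, XOR the IEEE-754 bit patterns of value and prediction, and encode the residuals, which for smooth data have long runs of leading zero bits:

    import numpy as np

    def xor_residuals(values: np.ndarray) -> np.ndarray:
        """Previous-value prediction: residual[i] = bits(values[i]) XOR bits(values[i-1])."""
        bits = values.astype(np.float32).view(np.uint32)
        prediction = np.concatenate(([np.uint32(0)], bits[:-1]))
        return bits ^ prediction

    # Smooth data yields residuals with many leading zeros, which a subsequent
    # run-length or entropy coder can exploit.
    data = np.linspace(0.0, 1.0, 8, dtype=np.float32)
    print([format(int(r), "032b") for r in xor_residuals(data)])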


Cloud readiness assessment of large-scale IT landscapes

Lecturer: Matthias Farwick, Txture - The Cloud Transformation Platform

Date: 14.01.2021

Abstract:
In this lunchtime seminar I will outline the current state of the global transformation of classic data center IT operations to the cloud. After that, I will discuss why even highly regulated organizations, such as banks, are moving towards the public cloud. Finally, this talk will outline the key challenges of assessing large-scale IT landscapes with thousands of applications for their cloud readiness, and how our cloud transformation software Txture tackles this problem.


eNNclave: Offline Inference with Model Confidentiality

Lecturer: Alex Schlögl, Researcher at SEC - University of Innsbruck

Date: 17.12.2020

Abstract:
Outsourcing machine learning inference creates a confidentiality dilemma: either the client has to trust the server with potentially sensitive input data, or the server has to share its commercially valuable model. Known remedies include homomorphic encryption, multi-party computation, or placing the entire model in a trusted enclave. None of these are suitable for large models. For two relevant use cases, we show that it is possible to keep all confidential model parameters in the last (dense) layers of deep neural networks. This allows us to split the model such that the confidential parts fit into a trusted enclave on the client side. We present the eNNclave toolchain to cut TensorFlow models at any layer, splitting them into public and enclaved layers. This preserves TensorFlow’s performance optimizations and hardware support for public layers, while keeping the parameters of the enclaved layers private. Evaluations on several machine learning tasks spanning multiple domains show that fast inference is possible while keeping the sensitive model parameters confidential. Accuracy results are close to the baseline where all layers carry sensitive information and confirm that our approach is practical.
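
A hedged sketch of the splitting idea (simple Keras layer surgery under the assumption of a sequential model; the actual eNNclave toolchain additionally prepares the private part for its enclave runtime):

    import tensorflow as tf

    def split_model(model: tf.keras.Model, cut: int):
        """Split a model into a public prefix (layers [0, cut)) and a confidential
        suffix (layers [cut, end)), e.g. the final dense layers."""
        public = tf.keras.Sequential(model.layers[:cut], name="public_part")
        private = tf.keras.Sequential(model.layers[cut:], name="enclave_part")
        return public, private

    # The public part keeps full hardware acceleration on untrusted hardware;
    # the private part would run inside a trusted enclave on the client.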


Using learning classifier systems for the DSE of adaptive embedded systems

Lecturer: Fedor Smirnov, Researcher at DPS - University of Innsbruck

Date: 10.12.2020

Abstract:
Modern embedded systems are not only becoming more and more complex but are also often exposed to dynamically changing run-time conditions such as resource availability or processing power requirements. This trend has led to the emergence of adaptive systems which are designed using novel approaches that combine a static off-line Design Space Exploration (DSE) with the consideration of the dynamic run-time behavior of the system under design. In contrast to a static design approach, which provides a single design solution as a compromise between the possible run-time situations, the off-line DSE of these so-called hybrid design approaches yields a set of configuration alternatives, so that at run time it becomes possible to dynamically choose the option most suited to the current situation. However, most of these approaches still use optimizers which were developed for a purely static design. Consequently, modeling complex dynamic environments or run-time requirements is either not possible or comes at the cost of a significant computation overhead or results of poor quality. As a remedy, this talk introduces Learning Optimizer Constrained by ALtering conditions (LOCAL), a novel optimization framework for the DSE of adaptive embedded systems. Following the structure of Learning Classifier System (LCS) optimizers, the proposed framework optimizes a strategy, i.e., a set of conditionally applicable solutions for the problem at hand, instead of a set of independent solutions. We show how the proposed framework, which can be used for the optimization of any adaptive system, is applied to the optimization of dynamically reconfigurable many-core systems, and we provide experimental evidence that the strategy obtained in this way offers superior embeddability compared to the solutions provided by a state-of-the-art hybrid approach that uses an evolutionary algorithm.
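
A hedged, highly simplified sketch of what a strategy means in this setting (hypothetical condition and configuration types, not the LOCAL implementation): a set of conditionally applicable configurations from which the run-time system picks one matching the current situation:

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Rule:
        condition: Callable[[dict], bool]  # does this configuration apply to the situation?
        configuration: str                 # e.g. a mapping of tasks to cores
        quality: float                     # design-time estimate of the rule's quality

    def select_configuration(strategy: list[Rule], situation: dict) -> str:
        applicable = [r for r in strategy if r.condition(situation)]
        if not applicable:
            raise RuntimeError("no configuration covers the current run-time situation")
        return max(applicable, key=lambda r: r.quality).configuration

    # Hypothetical example: only two cores are currently available.
    strategy = [
        Rule(lambda s: s["cores"] >= 4, "high-performance mapping", 0.9),
        Rule(lambda s: s["cores"] >= 1, "fallback mapping", 0.4),
    ]
    print(select_configuration(strategy, {"cores": 2}))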


Computationally weak quantum systems and why they matter

Lecturer: Ralph Bottesch, Researcher at CL - University of Innsbruck

Date: 03.12.2020

Abstract:
Several classes of weak quantum-computational systems have been studied. These systems make use of quantum-mechanical effects, but are nevertheless no more powerful than classical computers. I will give an overview of this sub-field of quantum computing and explain why it is important for both theory and practice.


Service-driven goal-oriented dialog systems

Lecturer: Umutcan Simsek, Researcher at STI - University of Innsbruck

Date: 26.11.2020

Abstract:
Following the recent developments in machine learning, conversational agents have become increasingly ubiquitous. Goal-oriented dialog systems (e.g., intelligent personal assistants such as Amazon Alexa and Google Assistant) now enable the consumption of data and services via natural language dialogs. However, the scalable development of such dialog systems is challenging, since each new capability added to a dialog system (e.g., via web services) can require a significant development effort. This talk will report on the semi-automated generation of goal-oriented dialog systems based on lightweight semantic descriptions of web services. A web service is modeled as a set of schema.org actions that can be taken on resources. These potential actions are then processed semi-automatically to generate and extend goal-oriented dialog systems dynamically. We demonstrate our approach with a proof of concept in the tourism domain. We generate a goal-oriented dialog system based on web services from scratch and show how various tasks are completed with conversations to achieve the user’s goal.
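
For illustration (a hedged, hypothetical schema.org-style action annotation, not one of the project's actual service descriptions), a web service might advertise a searchable accommodation offer roughly like this:

    # Hypothetical schema.org-style action description, written as a Python dict
    # for readability; in practice this would be JSON-LD published by the service.
    search_action = {
        "@context": "https://schema.org",
        "@type": "SearchAction",
        "object": {"@type": "LodgingBusiness"},
        "target": {
            "@type": "EntryPoint",
            "urlTemplate": "https://example.org/hotels?location={location}",  # hypothetical endpoint
        },
        "query-input": "required name=location",
    }
    # From such descriptions, dialog goals and slot-filling questions
    # (e.g. asking the user for the missing 'location') can be derived.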


Universality in spin models, automata, and neural networks

Lecturer: Gemma de las Cuevas, Department of Theoretical Physics, University of Innsbruck

Date: 19.11.2020

Abstract:
Why is it so easy to generate complexity? I will argue that this is due to the phenomenon of universality — essentially every non-trivial system is universal, and thus able to explore all complexity in its domain. We understand the phenomenon of universality in spin models, automata and neural networks. I will explain the first step toward rigorously linking the first two. I will also talk about one of the consequences of universality, namely undecidability.


Task planning for robotics using object-centered predicates and action contexts

Lecturer: Alejandro Agostini, Researcher at IIS - University of Innsbruck

Date: 12.11.2020

Abstract:
Symbolic planning is a useful problem-solving paradigm that finds sequences of operators (or plans) that progressively transform the world state until a goal is achieved. It uses well-known artificial intelligence (AI) search techniques that encode task structures into states and actions using a human-readable notation. This makes it particularly appealing for robotic executions of human-like tasks, allowing a lay person to naturally specify a task to a robot (e.g. set a table) while letting the robot automatically generate the sequence of instructions to complete it. However, symbolic planning and robotic techniques use different representations and search strategies, which poses serious difficulties when these paradigms are to be combined into a single framework. Current approaches tackle this problem by defining ad-hoc solutions for particular laboratory conditions that lead to the generation of physically impractical plans or that require intensive computation to transform a plan into feasible robot motions. To address this problem, we propose a symbolic domain representation that consistently encodes relevant geometric constraints to favour the generation of physically feasible plans. These constraints are described using an object-centered perspective that can be directly linked to robot sensing parameters (e.g. object poses) without handcrafting symbol-signal relations. For plan execution, we evaluate the context of a symbolic action in the plan to infer its actual intention, e.g. picking a bottle with the intention of pouring afterwards, which permits selecting suitable action parameters for the generation of robot motions.


Test and validation of wireless IoT using testbeds

Lecturer: Jan Beutel, Researcher at NES - University of Innsbruck

Date: 05.11.2020

Abstract:
Networked embedded systems such as Wireless Sensor Networks, Internet of Things applications or cyber-physical systems are typically implemented on resource-limited devices. The intricacies of systems operating at their resource limits, e.g. memory or power, their distributed nature and their tight integration within a certain application domain require testing and validating such systems in a realistic setting. To this end, testbeds have become the de facto medium of choice. In this talk we will briefly review their evolution and then demonstrate several of our own developments, leading to the most recent end-member: FlockLab 2, a second-generation testbed supporting multi-modal, high-accuracy and high-dynamic-range measurements of power and logic timing, combined with in-situ debug and trace infrastructure for modern microcontrollers, allowing for reproducible evaluation and benchmarking. We detail the architecture, provide a characterization and demonstrate the interface based on examples from academic research as well as real-world applications.


Inpainting with particle hydrodynamics

Lecturer: Viktor Daropoulos, Researcher at IGS - University of Innsbruck

Date: 22.10.2020

Abstract:
Image inpainting refers to an interpolation technique used to reconstruct a damaged or incomplete image by exploiting available image information. Restoring the missing image data, in a visually plausible manner, is a challenging task since it is an ill-posed inverse problem. The main goal of this work is to perform the image inpainting process using the Smoothed Particle Hydrodynamics (SPH) technique, a meshfree approach, by exploiting a set of sparsely distributed image samples. Spatial and data optimization is performed and various isotropic and anisotropic kernels are assessed both on random and spatially optimized inpainting masks.
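
For context (the standard SPH interpolation formula, not the specific spatial and data optimization scheme from the talk), a quantity f is reconstructed at position x from scattered samples x_j as

    f(x) \approx \sum_j \frac{m_j}{\rho_j} \, f(x_j) \, W(x - x_j, h),

where W is a smoothing kernel with support radius h. In the inpainting setting, the samples are the known pixels of the sparse mask, and the choice of (isotropic or anisotropic) kernel W governs the reconstruction quality.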


Understanding Different Risk Perceptions and Security Behaviours of Crypto-Asset Users

Lecturer: Svetlana Abramova, Researcher at SEC - University of Innsbruck

Date: 15.10.2020

Abstract:
This talk will cover challenges of applying user studies in cryptocurrency research and present quantitative results from a survey of crypto-asset users conducted in 2020. The approach rests on established behavioural theories and cluster analysis. It accounts for the heterogeneity of crypto-asset users with a new typology that partitions the sample into three robust clusters of users. I will present the utility of the identified typology in better understanding individuals’ characteristics, security decisions and the measures adopted by different users to preserve the secrecy of their private keys.


HotPoW: Finality from proof-of-work quorums

Lecturer: Patrik Keller, Researcher at SEC - University of Innsbruck

Date: 08.10.2020

Abstract:
I will present my work on HotPoW, a permissionless consensus protocol that improves over Bitcoin in two ways. First, with parallel proof-of-work we are able to record k times more puzzle solutions per unit of time without changing the assumptions on the underlying network. Second, an appropriate choice of k enables final transaction confirmations after a practical amount of time.


 
