
Groundhandling in front of my daughter.
(May 2019)


Viktor Mauch

- Diploma degree in physics at the Karlsruhe Institute of Technology (KIT).
- Since 2015: software developer at andrena objects.

I'm interested in computing, software development and everything in between and around these topics.

Before becoming a software engineer, I worked as a scientific assistant at the Institute of Experimental Particle Physics (ETP) and, most recently, at the Steinbuch Centre for Computing (SCC). My research focused on High Performance Cloud Computing as a Service (HPCaaS).

My leisure activities include paragliding, chess and alpine hiking, though these now compete constantly with my children and the allotment garden ;-).

And I love fruit salad more than anything.






Thank you for your support.

Maintaining this website takes time and other resources. Please support my work through your amazon.de purchases.

Publications

Hyper Link:
IGI Global

Abstract:
Modern applications for analysing 2D/3D data require complex visual output features which are often based on the multi-platform OpenGL® API for rendering vector graphics. Instead of providing classical workstations, the provision of powerful virtual machines (VMs) with GPU support in a scientific cloud with direct access to high performance storage is an efficient and cost effective solution. However, the automatic deployment, operation and remote access of OpenGL® API-capable VMs with professional visualization applications is a non-trivial task. In this chapter the authors demonstrate the concept of such a flexible cloud-like analysis infrastructure within the framework of the project ASTOR. The authors present an Analysis-as-a-Service (AaaS) approach based on VMware™-ESX for on demand allocation of VMs with dedicated GPU cores and up to 256 GByte RAM per machine.
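As a rough illustration of the on-demand allocation described in the abstract, the sketch below places a VM with dedicated GPU cores under the quoted 256 GByte per-machine RAM cap. All class and function names are hypothetical; the actual system is built on VMware ESX, which is not shown here.

```python
# Illustrative first-fit allocation of VMs with dedicated GPU cores and a
# per-machine RAM cap, as described in the abstract. Names are hypothetical.
from dataclasses import dataclass

MAX_RAM_GB = 256  # upper RAM limit per VM quoted in the abstract

@dataclass
class Host:
    name: str
    free_gpus: int
    free_ram_gb: int

def allocate_vm(hosts, gpus, ram_gb):
    """Place a VM with dedicated GPUs on the first host with capacity."""
    if ram_gb > MAX_RAM_GB:
        raise ValueError(f"RAM request exceeds {MAX_RAM_GB} GB per machine")
    for host in hosts:
        if host.free_gpus >= gpus and host.free_ram_gb >= ram_gb:
            host.free_gpus -= gpus
            host.free_ram_gb -= ram_gb
            return {"host": host.name, "gpus": gpus, "ram_gb": ram_gb}
    return None  # no capacity: the request would be queued or rejected

hosts = [Host("esx01", free_gpus=2, free_ram_gb=512)]
vm = allocate_vm(hosts, gpus=1, ram_gb=128)
```

First-fit is only the simplest possible policy; a production scheduler would also weigh GPU locality and storage proximity.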

BibTeX Code:
@article{mexner2017opengl,
title={OpenGL{\textregistered} API-Based Analysis of Large Datasets in a Cloud Environment},
author={Mexner, Wolfgang and Bonn, Matthias and Kopmann, Andreas and Mauch, Viktor and Ressmann, Doris and Chilingaryan, Suren A and Jerome, Nicholas Tan and van de Kamp, Thomas and Heuveline, Vincent and L{\"o}sel, Philipp and others},
journal={Design and Use of Virtualization Technology in Cloud Computing},
pages={161},
year={2017},
publisher={IGI Global}
}

ISBN: 978-1-61208-388-9, Pages 66-69

Hyper Link:
ResearchGate, Paper Award

Abstract:
Today, most high performance computing (HPC) systems are equipped with high-speed interconnects providing low communication and synchronization latencies in order to run tightly coupled parallel computing jobs. They are typically managed and operated by individual institutions and offer a fixed capacity and static runtime environment with a limited selection of applications, libraries and system software components. In contrast, a cloud-based Infrastructure-as-a-Service (IaaS) model for HPC resources promises more flexibility, as it enables elastic on-demand provisioning of virtual clusters and allows users to modify the runtime environment down to the operating system level. The goal of this research effort is the general demonstration of a prototypic HPC IaaS system allowing automated provisioning of virtualized HPC resources while retaining high and predictable performance. We present an approach to use high-speed cluster interconnects like InfiniBand within an IaaS environment. Our prototypic system is based on the cloud computing framework OpenStack in combination with the Single Root I/O Virtualization (SR-IOV) mechanism for PCI device virtualization. Our evaluation shows that, with this approach, we can successfully provide dynamically isolated partitions consisting of multiple virtual machines connected over virtualized InfiniBand devices. Users are put in the position to request their own virtualized HPC cluster on demand. They are able to extend or shrink the assigned infrastructure and to change the runtime environment according to their needs. To ensure the suitability for HPC applications, we evaluate the performance of a virtualized cluster compared to a physical environment by running latency and High-Performance Linpack (HPL) benchmarks.
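The "dynamically isolated partitions" in the abstract build on InfiniBand partitioning. As a hedged sketch, the toy model below assigns each tenant a distinct partition key (pkey) and lets two VMs communicate only within the same partition; the function names and the simplified membership model are illustrative assumptions, not the paper's implementation.

```python
# Toy model of multi-tenant isolation via InfiniBand partition keys (pkeys).
# One pkey per tenant; VMs only see traffic inside their own partition.
# All names and the simplified model are illustrative assumptions.

def assign_pkeys(vm_tenants, base_pkey=0x8001):
    """Map each VM to its tenant's partition key (one pkey per tenant)."""
    tenant_pkeys = {}
    vm_pkeys = {}
    for vm, tenant in vm_tenants.items():
        if tenant not in tenant_pkeys:
            tenant_pkeys[tenant] = base_pkey + len(tenant_pkeys)
        vm_pkeys[vm] = tenant_pkeys[tenant]
    return vm_pkeys

def can_communicate(vm_pkeys, a, b):
    """Two VMs may talk only if they share the same partition key."""
    return vm_pkeys[a] == vm_pkeys[b]

pkeys = assign_pkeys({"vm1": "alice", "vm2": "alice", "vm3": "bob"})
```

In a real deployment the subnet manager enforces pkey membership at the HCA level; this sketch only captures the bookkeeping.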

BibTeX Code:
@article{mauch2015deployment,
title={Deployment of Virtual InfiniBand Clusters with Multi-tenancy for Cloud Computing},
author={Mauch, Viktor},
journal={CLOUD COMPUTING 2015},
pages={81},
year={2015}
}

Hyper Link:
PCaPAC 2014, Session: FC02 - User Interfaces

Abstract:
Modern data analysis applications for 2D/3D data samples require complex visual output features which are often based on OpenGL, a multi-platform API for rendering vector graphics. They demand special computing workstations with a corresponding CPU and GPU power, enough main memory and fast network interconnects for a performant remote data access. For this reason, users depend heavily on available free workstations, both temporally and locally. The provision of virtual machines (VMs) accessible via a remote connection could avoid this inflexibility. However, the automatic deployment, operation and remote access of OpenGL-capable VMs with professional visualization applications is a non-trivial task. In this paper, we discuss a concept for a flexible analysis infrastructure that will be part of the project ASTOR, which is the abbreviation for “Arthropod Structure revealed by ultra-fast Tomography and Online Reconstruction”. We present an Analysis-as-a-Service (AaaS) approach based on the on-demand allocation of VMs with dedicated GPU cores and a corresponding analysis environment to provide a cloud-like analysis service for scientific users.

BibTeX Code:
@article{mauch2014opengl,
title={OpenGL-BASED data analysis in virtualized self-service environments},
author={Mauch, Viktor and Bonn, M and Chilingaryan, S and Kopmann, A and Mexner, W and Ressmann, D},
journal={Proc. PCaPAC2014, http://jacow.org},
year={2014}
}

EAN: 4038858091822, Page 96-98

Hyper Link:
heise shop

Abstract:
Among the range of software products for setting up a cloud system, OpenStack has established itself. The open-source tool owes this position above all to the strong commitment of well-known organizations and companies. This analysis reveals its structure and options.

BibTeX Code: 
@article{kurze2012fahrt,
author = {Tobias Kurze and Viktor Mauch},
title = {Fahrt aufnehmen},
subtitle = {OpenStack – Open-Source-Software zur Cloud-Steuerung},
journal = {iX special},
volume = {3},
year = {2012},
pages = {96--98},
}

ISBN: 978-1-4503-1161-8, Article No. 9

Hyper Link:
ACM Digital Library


Abstract:
High Performance Computing (HPC) employs fast interconnect technologies to provide low communication and synchronization latencies for tightly coupled parallel compute jobs. Contemporary HPC clusters have a fixed capacity and static runtime environments; they cannot elastically adapt to dynamic workloads, and provide a limited selection of applications, libraries, and system software. In contrast, a cloud model for HPC clusters promises more flexibility, as it provides elastic virtual clusters to be available on-demand. This is not possible with physically owned clusters. In this paper, we present an approach that makes it possible to use InfiniBand clusters for HPC cloud computing. We propose a performance-driven design of an HPC IaaS layer for InfiniBand, which provides throughput and latency-aware virtualization of nodes, networks, and network topologies, as well as an approach to an HPC-aware, multi-tenant cloud management system for elastic virtualized HPC compute clusters.
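As a rough illustration of the "latency-aware virtualization of nodes, networks, and network topologies" mentioned above, the sketch below prefers nodes under a single leaf switch so a virtual cluster spans as few switch hops as possible. The topology model, names, and greedy fallback are hypothetical, not the paper's actual design.

```python
# Illustrative topology-aware placement: prefer nodes under one leaf switch
# to keep a virtual cluster's latency low. Names are hypothetical.
from collections import defaultdict

def place_cluster(node_switch, n_nodes):
    """Pick n_nodes free nodes, preferring a single leaf switch."""
    by_switch = defaultdict(list)
    for node, switch in node_switch.items():
        by_switch[switch].append(node)
    # Best case: one switch has enough free nodes for the whole cluster.
    for nodes in by_switch.values():
        if len(nodes) >= n_nodes:
            return sorted(nodes)[:n_nodes]
    # Fallback: fill greedily from the fullest switches to limit spread.
    picked = []
    for nodes in sorted(by_switch.values(), key=len, reverse=True):
        picked.extend(sorted(nodes))
        if len(picked) >= n_nodes:
            return picked[:n_nodes]
    return None  # not enough free nodes

free_nodes = {"n1": "sw1", "n2": "sw1", "n3": "sw2", "n4": "sw2", "n5": "sw2"}
cluster = place_cluster(free_nodes, 3)
```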

BibTeX Code:
@inproceedings{hillenbrand2012virtual,
title={Virtual InfiniBand Clusters for HPC clouds},
author={Hillenbrand, M. and Mauch, V. and Stoess, J. and Miller, K. and Bellosa, F.},
booktitle={Proceedings of the 2nd International Workshop on Cloud Computing Platforms},
pages={9},
year={2012},
organization={ACM}
}

ISSN: 0935-9680, Page 72-73

Hyper Link:
heise online


Abstract:
Alongside the commercial management tools for the cloud, open-source tools have now become established. OpenNebula has found wide acceptance and is now available in a new version.

BibTeX Code:
@article{mauch2012freie,
author = {Viktor Mauch and Marcel Kunze},
title = {Freie Fahrt},
subtitle = {Upgrade der Cloud-Management-Plattform OpenNebula},
journal = {iX},
volume = {4},
year = {2012},
pages = {72--73},
}

ISSN: 0167-739X, Issue 6, Pages 1408–1416

Hyper Link:
ScienceDirect


Abstract:
Today’s high performance computing systems are typically managed and operated by individual organizations in private. Computing demand is fluctuating, however, resulting in periods where dedicated resources are either underutilized or overloaded. A cloud-based Infrastructure-as-a-Service (IaaS) approach for high performance computing applications promises cost savings and more flexibility. In this model virtualized and elastic resources are utilized on-demand from large cloud computing service providers to construct virtual clusters exactly matching a customer’s specific requirements. This paper gives an overview of the current state of high performance cloud computing technology and we describe the underlying virtualization techniques and management methods. Furthermore, we present a novel approach to use high speed cluster interconnects like InfiniBand in a high performance cloud computing environment.

BibTeX Code:
@article{mauch2012high,
title={High performance cloud computing},
author={Mauch, V. and Kunze, M. and Hillenbrand, M.},
journal={Future Generation Computer Systems},
year={2012},
publisher={Elsevier}
}

ISBN: 978-1-60750-802-1, Page 109-123

Hyper Link:
IOS Press Books Online


Abstract:
Using cloud technologies, it is possible to provision HPC services on-demand. Customers of the service are able to provision virtual HPC systems in a self-service portal and deploy and execute their specific application without operator intervention. The business model foresees to only charge the amount of resources actually used. There remain open questions in the area of performance optimization, advanced resource management, and fault tolerance. The Open Cirrus cloud computing testbed offers an environment in which we can treat these problems.
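The pay-per-use model mentioned above ("only charge the amount of resources actually used") can be sketched as simple metered accounting; the rates and resource names below are invented for illustration.

```python
# Minimal metered-billing sketch: charge only for resources actually used.
# Rates and resource names are hypothetical.

RATES = {"core_hours": 0.05, "gb_hours": 0.01}  # invented prices per unit

def bill(usage):
    """Sum the cost over all metered resource usage entries."""
    return round(sum(RATES[kind] * amount for kind, amount in usage.items()), 2)

cost = bill({"core_hours": 100, "gb_hours": 200})  # 100*0.05 + 200*0.01
```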

BibTeX Code:
@book{foster2011high,
title={High Performance Computing: From Grids and Clouds to Exascale},
author={Foster, I. and Gentzsch, W. and Grandinetti, L. and Joubert, G.R.},
year={2011},
publisher={Ios PressInc}
}

ISSN: 0170-6012, Page 242-254

Hyper Link:
SpringerLink


Abstract:
The paradigm of Cloud Computing has gained considerable interest in the past two years. Dynamic IT services are the basis of new business activities in the Internet. Public services are offered on a commercial basis, but they are very often proprietary with respect to technology and specifications. Open source solutions can be deployed in a private context and allow to construct an in-house cloud. Open source solutions furthermore allow to work on software and architecture projects in the community. Of special interest is compatibility to the interfaces of the Amazon Web Services as these form a de facto standard due to their widespread use and the existence of a broad range of management tools. The article discusses these aspects and highlights recent developments in the field.

BibTeX Code:
@article{baunprivate,
title={Private Cloud-Infrastrukturen und Cloud-Plattformen},
author={Baun, C. and Kunze, M. and Kurze, T. and Mauch, V.},
journal={Informatik-Spektrum},
pages={1--13},
publisher={Springer}
}

ISSN: 0935-9680, Page 48-52

Hyper Link:
heise online, iX-Archiv


Abstract:
Cloud systems have a reputation for being vast and impenetrable. Nevertheless, a growing number of open-source projects are emerging that are quite capable of building and managing such distributed systems, OpenNebula among them.

BibTeX Code:
@article{mauch2011lichter,
author = {Viktor Mauch and Marcel Kunze},
title = {Lichter Nebel},
subtitle = {Quelloffenes Projekt zur Cloud-Verwaltung: OpenNebula},
journal = {iX},
volume = {5},
year = {2011},
pages = {48--51},
}

Journal of Physics: Conference Series, Volume 331, Part 8

Hyper Link:
IOPscience: Journal of Physics


Abstract: 
An efficient administration of computing centres requires sophisticated tools for the monitoring of the local computing infrastructure. The enormous flood of information from different monitoring sources retards the identification of problems and complicates the local administration unnecessarily. The meta-monitoring system "HappyFace" offers elegant mechanisms to collect, process and evaluate all relevant information and to condense it into a simple rating visualisation, reflecting the current status of a computing centre. In this paper, we give an overview of the HappyFace architecture and selected modules.
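As an illustration of condensing many monitoring results "into a simple rating visualisation", the sketch below reduces per-module ratings to one site status. The worst-case aggregation and the thresholds are assumptions made for the example, not HappyFace's actual algorithm.

```python
# Sketch of condensing per-module monitoring ratings into one site status,
# in the spirit of HappyFace. Aggregation rule and thresholds are assumptions.

def condense(module_ratings):
    """Condense per-module ratings in [0, 1] into one overall site status."""
    overall = min(module_ratings.values())  # site is as healthy as its worst module
    if overall >= 0.66:
        status = "happy"
    elif overall >= 0.33:
        status = "neutral"
    else:
        status = "unhappy"
    return overall, status

overall, status = condense({"batch": 1.0, "storage": 0.8, "transfers": 0.4})
```

A worst-case `min` makes a single failing module visible at a glance; an average would hide it.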

BibTeX Code:
@inproceedings{mauch2011happyface,
title={The HappyFace Project},
author={Mauch, V. and others},
booktitle={Journal of Physics: Conference Series},
volume={331},
pages={082011},
year={2011},
organization={IOP Publishing}
}

Journal of Physics: Conference Series, Volume 219, Part 6

Hyper Link:
IOPscience: Journal of Physics


Abstract: 
An efficient administration of computing centres requires sophisticated tools for the monitoring of the local infrastructure. Sharing such resources in a grid infrastructure, like the Worldwide LHC Computing Grid (WLCG), comes along with a large number of external monitoring systems, offering information on the status of the services and user jobs at a grid site. This huge flood of information from many different sources retards the identification of problems and complicates the local administration. In addition, the web interfaces for the access to the site specific information are often very slow and uncomfortable to use. A meta-monitoring system which automatically queries the different relevant monitoring systems could provide fast and comfortable access to all important information for the local administration. It also becomes feasible to easily correlate information from different sources and to provide easy access for non-expert users. In this paper, we describe the HappyFace Project, a modular software framework for this purpose. It queries existing monitoring sources and processes the results to provide a single point of entrance for information on a grid site and its specific services.

BibTeX Code:
@inproceedings{buege2010site,
title={Site specific monitoring of multiple information systems--the HappyFace Project},
author={B{\"u}ge, V. and Mauch, V. and Quast, G. and Scheurer, A. and Trunov, A.},
booktitle={Journal of Physics: Conference Series},
volume={219},
pages={062057},
year={2010},
organization={IOP Publishing}
}

Address

Rintheimer Str. 2a
76131 Karlsruhe
Germany

Contact
  • E-Mail: viktormauch@online.de
  • Phone: +49 (0) 721 9144947
