### Refine

#### Document Type

- Conference Proceeding (80)
- Doctoral Thesis (17)
- Article (12)
- Master's Thesis (4)
- Bachelor Thesis (3)
- Study Thesis (2)
- Report (1)

#### Institute

- In Zusammenarbeit mit der Bauhaus-Universität Weimar (50)
- Institut für Strukturmechanik (20)
- Graduiertenkolleg 1462 (12)
- Professur Angewandte Mathematik (4)
- Juniorprofessur Augmented Reality (3)
- Juniorprofessur Stochastik und Optimierung (3)
- Professur Betriebswirtschaftslehre im Bauwesen (3)
- Professur Grundbau (3)
- Professur Informatik im Bauwesen (3)
- Institut für Europäische Urbanistik (2)

#### Year of publication

- 2010 (119)

In the paper presented, reinforced concrete shells of revolution are analyzed in both the meridional and the circumferential direction. Taking into account the physical non-linearity of the material, the internal forces and deflections of the shell as well as the strain distribution at the cross-sections are calculated. The behavior of concrete under compression is described by linear and non-linear stress-strain relations. The description of the behavior of concrete under tension must account for tension-stiffening effects. A tri-linear function is used to formulate the material law of the reinforcement. The problem cannot be solved analytically due to the physical non-linearity. Thus a numerical solution is formulated by means of the Lagrange principle of the minimum of the total potential energy. The kinematically admissible field of deformation is defined by the displacements u in the meridional and w in the radial direction. These displacements must satisfy the equations of compatibility and the kinematical boundary conditions of the shell. The strains are linearly distributed across the wall thickness. The strain energy depends on the specifics of the material behavior. Using integral formulations of the material law [1], the strain energy of each part of the cross-section is defined as a function of the strains at the boundaries of the cross-sections. The shell is discretised in the meridional direction. Various methods of numerical differentiation and numerical integration are applied in order to determine the deformations and the strain energy. The unknown displacements u and w are calculated from an unrestricted extremum problem based on the minimum of the total potential energy. From a mathematical point of view, the objective function is convex, so the minimum can be determined without difficulty.
The advantage of this formulation is that, unlike non-linear methods with path-following algorithms, the calculation does not have to account for changing stiffness and load increments. All iterations necessary to find the solution are integrated into the "Solver". The model presented provides many ways of investigating the influence of various material parameters on the stresses and deformations of the entire shell structure.
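
The convex-minimization idea can be sketched in a few lines. The toy example below (invented stiffness matrix and load vector, not the authors' shell model) finds the displacements by minimizing the total potential energy Pi(u) = 1/2 u^T K u - f^T u directly; for this linear test case the minimizer satisfies K u = f, and no load increments or path-following are needed.

```python
import numpy as np

def total_potential_energy(u, K, f):
    # Pi(u) = strain energy minus work of external loads (linear test case)
    return 0.5 * u @ K @ u - f @ u

def minimize_energy(K, f, steps=5000, lr=0.1):
    # plain gradient descent stands in for the "Solver" mentioned in the text
    u = np.zeros_like(f)
    for _ in range(steps):
        u -= lr * (K @ u - f)  # gradient of Pi with respect to u is K u - f
    return u

K = np.array([[4.0, -1.0], [-1.0, 3.0]])  # hypothetical stiffness matrix
f = np.array([1.0, 2.0])                  # hypothetical load vector
u_min = minimize_energy(K, f)
```

Because Pi is convex, any descent method converges to the unique minimizer, which here coincides with the solution of the linear system K u = f.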

This paper deals with the modelling and analysis of masonry vaults. Numerical FEM analyses are performed using the LUSAS code. Two vault typologies are analysed (barrel and cross-ribbed vaults), parametrically varying geometrical proportions and constraints. The proposed model and the developed numerical procedure are implemented in a computer analysis. Numerical applications are developed to assess the effectiveness of the model and the efficiency of the numerical procedure. The main objective of the present paper is the development of a computational procedure that defines the 3D structural behaviour of masonry vaults. For each investigated example, the homogenized limit analysis approach has been employed to predict the ultimate load and the failure mechanisms. Finally, both a mesh dependence study and a sensitivity analysis are reported. The sensitivity analysis is conducted by varying mortar tensile strength and mortar friction angle over a wide range, with the aim of investigating the influence of the mechanical properties of the joints on the collapse load and the failure mechanisms. The proposed computer model is validated by comparison with experimental results available in the literature.

The present thesis deals with the geometric suffosion resistance of soils. With percolation theory, a probabilistic approach, an analytical method was chosen with which suffosive material transport processes can be modelled and quantified. Using the percolation model, an arbitrary pore structure of a real soil was modelled in three dimensions. Possible material transport processes within the modelled pore structure were then simulated. Generally valid laws were derived and limit conditions formulated. These are independent of the particular soil and describe relationships between material transport and pore structure. The results are applicable to homogeneous, isotropic and self-similar soil fabrics. Statements about specific soils can be made via the transformation method. To use the transformation method, the relevant pore structure, i.e. the pore constriction size distribution, must be determined beforehand.

Traffic Volume Risk in PPP Projects in the Road Sector - Determinants of Efficient Risk Allocation
(2010)

Despite extensive worldwide experience with public-private partnership projects in the road sector, dealing with traffic volume risk remains a challenge for the project participants. This thesis therefore addresses the central question of an efficient allocation of this risk, which plays a decisive role in the overall economic success of a road concession project. First, the characteristics of traffic volume risk and its numerous influencing factors are examined. The contract models used in practice for the operation of road infrastructure are then presented, and it is analysed how each model distributes traffic volume risk among the various contracting parties. On this basis, a criteria-based analytical framework is developed that evaluates the efficiency of different risk allocations between the contracting parties. It takes into account both the efficiency-relevant characteristics of the potential risk bearers in a PPP project and the efficiency-relevant effects of the different contract models. From the findings of this analysis, recommendations for dealing with traffic volume risk are finally derived.

Visual impairment is a common problem worldwide. Projector-based AR techniques can change the appearance of real objects and can thus help to improve visibility for the visually impaired. We propose a new framework for appearance enhancement with a projector-camera system that employs a model predictive controller. This framework enables arbitrary image processing in the real world, comparable to photo-retouching software, and helps to improve visibility for the visually impaired. In this article, we show appearance enhancement results for Peli's method and Wolffsohn's method for low vision, and for Jefferson's method for color vision deficiencies. The experimental results confirm the potential of our method to enhance appearance for the visually impaired to the same degree as appearance enhancement for digital images and television viewing.

Using a quaternionic reformulation of the electrical impedance equation, we consider a two-dimensional separable-variables conductivity function and, proposing two different techniques, we obtain a special class of Vekua equation whose general solution can be approached by virtue of Taylor series in formal powers, for which it is possible to introduce an explicit Bers generating sequence.

In this note, we describe quite explicitly the Howe duality for Hodge systems and connect it with well-known facts of harmonic analysis and Clifford analysis. In Section 2, we recall briefly the Fischer decomposition and the Howe duality for harmonic analysis. In Section 3, the well-known fact that Clifford analysis is a real refinement of harmonic analysis is illustrated by the Fischer decomposition and the Howe duality for the space of spinor-valued polynomials in Euclidean space under the so-called L-action. On the other hand, for Clifford algebra valued polynomials we can consider another action, called in Clifford analysis the H-action. In the last section, we recall the recently obtained Fischer decomposition for the H-action. Whereas in Clifford analysis the prominent role is played by the Dirac equation, in this case the basic set of equations is formed by the Hodge system. Moreover, the analysis of Hodge systems can be viewed even as a refinement of Clifford analysis. In this note, we describe the Howe duality for the H-action. In particular, in Proposition 1, we recognize the Howe dual partner of the orthogonal group O(m) in this case as the Lie superalgebra sl(2|1). Furthermore, Theorem 2 gives the corresponding multiplicity-free decomposition with an explicit description of the irreducible pieces.

The treatment of geometric singularities in the solution of boundary value problems of elastostatics places increased demands on the mathematical modelling of the boundary value problem and requires specially adapted computational methods for an efficient evaluation. This thesis is concerned with the systematic generalization of the method of complex stress functions to three dimensions, the emphasis lying primarily on the justification of the mathematical method with particular regard to its practical applicability. The theoretical framework is provided by the theory of quaternion-valued functions. Accordingly, the class of monogenic functions is used as a basis to prove, in the first part of the thesis, a spatial analogue of Goursat's representation theorem and to construct generalized Kolosov-Muskhelishvili formulas. In view of the wide range of applications of the method, the second part of the thesis deals with the local and global approximation of monogenic functions. Complete orthogonal systems of monogenic spherical functions are constructed, from which novel representations of the canonical series expansions (Taylor, Fourier, Laurent) are defined. In analogy to complex power and Laurent series based on the holomorphic z-powers, these monogenic orthogonal series generalize all essential properties with respect to the hypercomplex derivative and the monogenic primitive. Finally, the qualitative and numerical properties of the developed function-theoretic methods are evaluated by means of representative examples. In this context, some further fields of application within spatial function theory are considered which require the special structural properties of the monogenic power and Laurent series expansions.

MULTI-SITE CONSTRUCTION PROJECT SCHEDULING CONSIDERING RESOURCE MOVING TIME IN DEVELOPING COUNTRIES
(2010)

Under the booming construction demand in developing countries, and particularly in Vietnam, construction contractors often perform multiple concurrent projects in different places. Existing construction project scheduling methods often assume the resource moving time between activities/projects to be negligible. When multiple projects are deployed in different places far from each other, this assumption has many shortcomings for properly modelling the real-world constraints. This is especially true in developing countries such as Vietnam, whose transportation systems are still backward and of low technical standard. This paper proposes a new algorithm named Multi-Site Construction Project Scheduling (MCOPS). The objective of this algorithm is to minimise the multi-site construction project duration under limited availability of renewable resources (labour, machines and equipment), combined with the moving time of required resources among activities/projects. Additionally, in order to mitigate the impact of resource moving time on the multi-site project duration, this paper proposes a new priority rule: Minimum Resource Moving Time (MinRMT). The MinRMT is applied to rank the finished activities in a priority order for releasing their resources to the waiting activities. In order to investigate the impact of the resource moving time among activities during the scheduling process, computational experiments were carried out. The results of the MCOPS-based computational experiments showed that the resource moving time among projects significantly impacts multi-site project durations and cannot be ignored in the multi-site project scheduling process. Besides, the efficiency of the MinRMT is also demonstrated by the results of the computational experiments in this paper.
Though the efforts in this paper are based on Vietnamese construction conditions, the proposed method can be usefully applied in other developing countries with similar construction conditions.
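
The MinRMT priority idea can be sketched as a simple selection rule: when a resource is released at some site, assign it to the waiting activity with the smallest moving time from that site. The site names, activities and moving times below are invented for illustration; the full MCOPS algorithm is not reproduced here.

```python
def min_rmt_pick(resource_site, waiting, moving_time):
    """Pick the waiting activity with minimal resource moving time.

    waiting: list of (activity, site) pairs
    moving_time: dict mapping (from_site, to_site) -> hours
    """
    return min(waiting, key=lambda item: moving_time[(resource_site, item[1])])

# hypothetical data: a crane is released at site "A"; three activities wait
moving_time = {("A", "B"): 5.0, ("A", "C"): 2.0, ("A", "D"): 8.0}
waiting = [("erect-frame", "B"), ("pour-slab", "C"), ("fit-out", "D")]
chosen = min_rmt_pick("A", waiting, moving_time)  # smallest move: A -> C
```

In a full scheduler this rule would be applied every time a resource is freed, so that the total time resources spend travelling between sites is kept small.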

In this paper we consider the time-independent Klein-Gordon equation on some conformally flat 3-tori with given boundary data. We set up an explicit formula for the fundamental solution. We show that any solution to the homogeneous Klein-Gordon equation on the torus can be represented as a finite sum over generalized 3-fold periodic elliptic functions that are in the kernel of the Klein-Gordon operator. Furthermore we prove Cauchy and Green type integral formulas and set up a Teodorescu and a Cauchy transform for the toroidal Klein-Gordon operator. These in turn are used to set up explicit formulas for the solution to the inhomogeneous version of the Klein-Gordon equation on the 3-torus.

A UNIFIED APPROACH FOR THE TREATMENT OF SOME HIGHER DIMENSIONAL DIRAC TYPE EQUATIONS ON SPHERES
(2010)

Using Clifford analysis methods, we provide a unified approach to obtain explicit solutions of some partial differential equations combining the n-dimensional Dirac and Euler operators, including generalizations of the classical time-harmonic Maxwell equations. The obtained regular solutions show strong connections between hypergeometric functions and homogeneous polynomials in the kernel of the Dirac operator.

In this paper we present rudiments of a higher dimensional analogue of the Szegö kernel method to compute 3D mappings from elementary domains onto the unit sphere. This formal construction provides us with a good substitute for the classical conformal Riemann mapping. We give explicit numerical examples and discuss a comparison of the results with those obtained alternatively by the Bergman kernel method.

Housing construction in Hanoi can today, more than 20 years after the beginning of the renovation policy and the market economy, which gave urban development a great opportunity for improvement, be reviewed and assessed in retrospect. The last 20 years are a short time in the thousand-year history of the city, yet during this period the city developed fastest and also, from an environmental point of view, most problematically. Without a suitable development strategy or appropriate urban-planning measures, the conflict between economy and ecology keeps growing. ... Finding a new housing concept that balances economy and ecology has become a highly topical question.

Tests on Polymer Modified Cement Concrete (PCC) have shown significantly large creep deformations. The reasons for this, as well as additional material phenomena, are explained in the following paper. Existing creep models developed for standard concrete are studied to determine the time-dependent deformations of PCC. These models are: model B3 by Bažant and Baweja, the models according to Model Code 90 and ACI 209, as well as model GL2000 by Gardner and Lockman. The calculated creep strains are compared to existing experimental data for PCC and the differences are pointed out. Furthermore, an optimization of the model parameters is performed to fit the models to the experimental data and achieve a better model prognosis.

This cumulative dissertation investigates aspects of consumer decision making in hedonic contexts and its implications for the marketing of media goods through a series of three empirical studies. All three studies take place within a common theoretical framework of decision making models, applying parts of the framework in novel ways to solve real-world marketing research problems (studies 1 and 2) and examining theoretical relationships between variables within the framework (study 3). One notable way in which the studies differ is their theoretical treatment of the hedonic component of decision making, i.e. the role and conceptualization of emotions.

The present article proposes an alternative way to compute the torsional stiffness based on three-dimensional continuum mechanics instead of applying a specific theory of torsion. A thin, representative beam slice is discretized by solid finite elements. Adequate boundary and coupling conditions are integrated into the numerical model to obtain a proper answer on the torsion behaviour, i.e. on shear center, shear stress and torsional stiffness. This finite element approach only includes general assumptions of beam torsion which are independent of the cross-section geometry. These assumptions essentially are: no in-plane deformation, constant torsion and free warping. Thus it is possible to achieve numerical solutions of high accuracy for arbitrary cross-sections. Due to the direct link to three-dimensional continuum mechanics, it is possible to extend the range of torsion analysis to sections which are composed of different materials or even to heterogeneous beams on a high scale of resolution. A brief validation study follows, in which the results are compared to analytical solutions.

The article presents an analysis of the stress distribution in a reinforced concrete support beam bracket which is a component of a prefabricated reinforced concrete building. The building structure is a spatial frame in which dilatations were applied. The proper stiffness of the structure is provided by frames with stiff joints, monolithic lift shafts and staircases. The prefabricated slab floors are supported by beam shelves shaped as an inverted letter 'T'. The beams are supported by the column brackets. In order to lower the storey height and at the same time fulfill the architectural demands, the designer lowered the height of the beam at the support zone. The analyzed case refers to the bracket zone where a slanted crack on the support beam bracket was observed. It could appear as a result of the allowable tensile stresses in the reinforced concrete being exceeded in the bracket zone. It should be noted that the construction solution applied, i.e. the concurrent support of the "undercut" beam on the column bracket, causes a local concentration of stresses in the undercut zone, where the strongest transverse forces and tangent stresses occur concurrently. Additional perpendicular stresses resulting from placing the slab floors on the lower part of the beam shelves sum up with those described above.

The numerical simulation of microstructure models in 3D requires, due to the enormous number of degrees of freedom, significant memory resources as well as parallel computational power. Compared to homogeneous materials, the material heterogeneity on the microscale induced by the different material phases demands adequate computational methods for the discretization and the solution process of the resulting highly nonlinear problem. To enable an efficient and scalable solution process of the linearized equation systems, the heterogeneous FE problem is described by a FETI-DP (Finite Element Tearing and Interconnecting - Dual Primal) discretization. The fundamental FETI-DP equation can be solved by a number of different approaches. In our approach, the FETI-DP problem is reformulated as a saddle point system by eliminating the primal and Lagrangian variables. For the reduced saddle point system, defined only by interior and dual variables, special Uzawa algorithms can be adapted for iteratively solving the FETI-DP saddle-point equation system (FETI-DP SPE). A conjugate gradient version of the Uzawa algorithm is presented, along with numerical tests on the FETI-DP discretization of small examples using the presented solution technique. Furthermore, the inversion of the interior-dual Schur complement operator can be approximated using different techniques to build an adequate preconditioning matrix, leading to substantial gains in computing time.
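
The Uzawa idea can be illustrated on a tiny saddle-point system of the form [A Bᵀ; B 0][x; y] = [f; g], standing in for the FETI-DP saddle-point equation: the interior variables x are eliminated in every step by an inner solve with A, and the dual variables y are updated from the residual of the constraint B x = g. This is the plain steepest-descent variant (the paper uses a conjugate-gradient Uzawa); A, B, f and g are toy data.

```python
import numpy as np

def uzawa(A, B, f, g, rho=0.5, iters=200):
    # steepest-descent Uzawa iteration on the saddle-point system
    y = np.zeros(B.shape[0])
    for _ in range(iters):
        x = np.linalg.solve(A, f - B.T @ y)  # eliminate x: inner solve with A
        y = y + rho * (B @ x - g)            # dual update from constraint residual
    return x, y

A = np.array([[2.0, 0.0], [0.0, 2.0]])  # SPD "interior" block
B = np.array([[1.0, 1.0]])              # constraint: x1 + x2 = g
f = np.array([1.0, 1.0])
g = np.array([0.5])
x, y = uzawa(A, B, f, g)
```

The iteration converges when the step rho is small enough relative to the Schur complement B A⁻¹ Bᵀ; replacing the fixed-step dual update with a CG iteration on that Schur complement yields the conjugate-gradient Uzawa variant mentioned in the text.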

PARAMETER IDENTIFICATION OF MESOSCALE MODELS FROM MACROSCOPIC TESTS USING BAYESIAN NEURAL NETWORKS
(2010)

In this paper, a parameter identification procedure using Bayesian neural networks is proposed. Based on a training set of numerical simulations, where the material parameters are sampled in a predefined range using Latin Hypercube sampling, a Bayesian neural network, which has been extended to describe the noise of multiple outputs using a full covariance matrix, is trained to approximate the inverse relation from the experiment (displacements, forces etc.) to the material parameters. The method offers not only the possibility to determine the parameters themselves, but also the accuracy of the estimate and the correlation between the parameters. As a result, a set of experiments can be designed to calibrate a numerical model.
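
The Latin Hypercube sampling step used to generate such a training set can be sketched as follows: each parameter range is split into n equal strata and every stratum is hit exactly once per parameter. The parameter bounds below are invented for illustration; the Bayesian neural network itself is not reproduced here.

```python
import random

def latin_hypercube(n, bounds, seed=0):
    """Return n sample points, one per stratum in every dimension."""
    rng = random.Random(seed)
    columns = []
    for lo, hi in bounds:
        strata = list(range(n))
        rng.shuffle(strata)  # random pairing of strata across dimensions
        columns.append([lo + (hi - lo) * (k + rng.random()) / n for k in strata])
    return list(zip(*columns))  # transpose: list of n points

# e.g. 10 samples of (Young's modulus, tensile strength) -- hypothetical ranges
samples = latin_hypercube(10, [(20e3, 40e3), (2.0, 5.0)])
```

Compared with plain random sampling, every one-dimensional projection of the design is stratified, which keeps the training set evenly spread over the predefined parameter ranges.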

In nonlinear simulations the loading is, in general, applied in an incremental way. Path-following algorithms are used to trace the equilibrium path during the failure process. Standard displacement controlled solution strategies fail if snap-back phenomena occur. In this contribution, a path-following algorithm based on the dissipation of the inelastic energy is presented which allows for the simulation of snap-backs. Since the constraint is defined in terms of the internal energy, the algorithm is not restricted to continuum damage models. Furthermore, no a priori knowledge about the final damage distribution is required. The performance of the proposed algorithm is illustrated using nonlinear mesoscale simulations.

Digital bookmarks, full-text search and multimedia content: the media revolution triggered by the Internet at the end of the 20th century did not leave the book untouched either. The spread of the World Wide Web, parallel to the rapid development of computer technology, made the digitization of the book possible and gave rise to the e-book as a new form of publication. For about ten years now, books can be made available not only in print but also electronically, which means a number of changes for the book industry and the reader. Modern reading devices, also called e-readers, allow an entire library to be stored on a single mobile device. The individual e-book is in no way inferior to the printed book in reading quality, and it additionally makes it possible to insert electronic notes and bookmarks, to search the full text for particular words, and to combine text with images, sound and video. Nevertheless, the e-book has not yet been a success story in Germany since its emergence. In particular, high prices for the reading devices still keep many readers from using e-books. For many people, the printed book is still too firmly a part of their everyday lives for them to replace it with the e-book. This situation raises several questions: Will the e-book establish itself as a medium and replace the printed book in the long term? Can the e-book even be understood as a new medium alongside newspapers, radio, television and the book? And what changes would the mass distribution of electronic books bring with it?

Euclidean Clifford analysis is a higher dimensional function theory offering a refinement of classical harmonic analysis. The theory is centered around the concept of monogenic functions, i.e. null solutions of a first order vector valued rotation invariant differential operator called the Dirac operator, which factorizes the Laplacian. More recently, Hermitean Clifford analysis has emerged as a new and successful branch of Clifford analysis, offering yet a refinement of the Euclidean case; it focusses on the simultaneous null solutions, called Hermitean (or h-) monogenic functions, of two Hermitean Dirac operators which are invariant under the action of the unitary group. In Euclidean Clifford analysis, the Clifford-Cauchy integral formula has proven to be a cornerstone of the function theory, as is the case for the traditional Cauchy formula for holomorphic functions in the complex plane. Previously, a Hermitean Clifford-Cauchy integral formula has been established by means of a matrix approach. This formula reduces to the traditional Martinelli-Bochner formula for holomorphic functions of several complex variables when taking functions with values in an appropriate part of complex spinor space. This means that the theory of Hermitean monogenic functions should also encompass other results of several variable complex analysis as special cases. Here we elaborate further on the obtained results and refine them, considering fundamental solutions, Borel-Pompeiu representations and the Teodorescu inversion, each of them being developed at different levels: the global level, handling vector variables, vector differential operators and the Clifford geometric product, as well as the blade level, where variables and differential operators act by means of the dot and wedge products. A rich world of results reveals itself, indeed including well-known formulae from the theory of several complex variables.

Several results concerning the distribution of the headway of buses in the flow behind a traffic signal are presented. The main focus of interest is the description of analytical models, which are verified by the results of Monte-Carlo methods. The advantage of analytical models (verified, but not derived, by simulation methods) is their flexibility with respect to possible generalizations. For instance, several random distributions of the flow arriving at the traffic signal can be compared. Attention is directed to the question of how the primary headway H (analyzed in front of the traffic signal) is mapped to the headway H' analyzed behind the traffic signal, and how the random distribution of H is mapped to that of H'. For the traffic flow in front of the traffic signal, several models are discussed. The first case considers the situation that buses operate on a common lane with the individual motor car traffic and the traffic flow is saturated. In the second situation, buses operate on a separate bus lane. Moreover, a mixed situation is discussed in order to model reality as closely as possible.
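
The Monte-Carlo side of such a study can be sketched with a minimal queue model (all numbers invented, not the paper's model): vehicles with exponential arrival headways H queue at a fixed-cycle signal and discharge one per saturation headway during green, so the headway H' behind the signal is either the saturation headway (queued vehicles) or close to the original arrival headway (free-flowing vehicles).

```python
import random

def departures(arrivals, cycle=60.0, green=30.0, h_sat=2.0):
    # green occupies the first `green` seconds of every cycle
    deps, last = [], -h_sat
    for a in arrivals:
        t = max(a, last + h_sat)           # wait for the vehicle ahead
        if t % cycle >= green:             # arrives in red: wait for next green
            t = (t // cycle + 1) * cycle
        deps.append(t)
        last = t
    return deps

# exponential arrival headways with mean 10 s (invented demand level)
rng = random.Random(42)
arrivals, t = [], 0.0
for _ in range(200):
    t += rng.expovariate(1 / 10.0)
    arrivals.append(t)
deps = departures(arrivals)
headways_out = [b - a for a, b in zip(deps, deps[1:])]
```

Comparing the empirical distribution of `headways_out` with the input headway distribution is exactly the H to H' mapping discussed in the text.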

The application of a partly decoupled approach by means of continuum mechanics facilitates the calculation of structural responses due to welding. The numerical results demonstrate the ability to predict welded connections qualitatively. As it is intended to integrate the local effects of a joint into the structural analysis of steel constructions, approaches of higher quality are necessary. The wide array of material parameters affecting the thermal, metallurgical and mechanical behavior, which have to be identified, is presented. For that purpose, further investigations are necessary to analyze the sensitivity of the models to different material properties. The experimental determination of every material parameter is not possible due to the extraordinarily laborious efforts needed. Besides that, experimentally identified parameters can be applied only to the tested steel quality and the measured temperature-time regimes. For that reason, alternative approaches to the identification of material parameters, such as optimization strategies, have to be applied. Once the material parameters are defined, a quantitative prediction of welded connections will also be possible. Numerical results show the effect of the phase transformation, activated by the welding process, on the residual stress state. As these phenomena occur in local areas in the range of crystal and grain sizes, the description of microscopic phenomena and their propagation to the macroscopic level by means of homogenization approaches might be expedient. Nevertheless, one should bear in mind the increasing number of material parameters as well as the complexity of their experimental determination. Thus the microscopic approach should always be investigated with regard to the ability and efficiency of the required prediction. Under certain circumstances a step backwards, adopting a phenomenological approach, can also be beneficial.

Steel structural design is an integral part of the building construction process. So far, various design methods have been applied in practice to satisfy the design requirements. This paper attempts to apply Differential Evolution Algorithms to automate and rationalize specific parts of the design process. The capacity of the Differential Evolution Algorithms to deal with continuous and/or discrete optimization of steel structures is also demonstrated. The goal of this study is to propose an optimal design of steel frame structures using built-up I-sections and/or a combination of standard hot-rolled profiles. For all steel frame structures considered in this paper, the optimization produced solutions better than the original solution designed by the manufacturer. Taking into consideration the criteria regarding the quality and efficiency of practical design, the optimal designs produced with the Differential Evolution Algorithms can completely replace conventional design because of their excellent performance.
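
The core of a differential evolution optimizer (the DE/rand/1/bin scheme) is compact enough to sketch here. The toy objective below is a simple sphere function standing in for the frame-weight objective of the paper, which is not reproduced; population size and control parameters F and CR are conventional default choices, not the paper's settings.

```python
import random

def differential_evolution(obj, bounds, pop_size=20, F=0.8, CR=0.9,
                           gens=200, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [obj(p) for p in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # mutation: combine three distinct other members (DE/rand/1)
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # force at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:  # binomial crossover
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)          # clip to bounds
                else:
                    v = pop[i][j]
                trial.append(v)
            ft = obj(trial)
            if ft <= fit[i]:                         # greedy selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

sphere = lambda p: sum(v * v for v in p)             # toy stand-in objective
x, fx = differential_evolution(sphere, [(-5.0, 5.0)] * 3)
```

For a real frame design, `obj` would evaluate the structure's weight plus penalties for violated code checks, and discrete profile catalogues can be handled by rounding the continuous genes to catalogue indices.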

CRITICAL STRESS ASSESSMENT IN ANGLE TO GUSSET PLATE BOLTED CONNECTION BY SIMPLIFIED FEM MODELLING
(2010)

Simplified modelling of friction grip bolted connections between steel members and gusset plates is often applied in engineering practice. The paper deals with the simplification of the pre-tensioned bolt model and the simplification of load transfer within the connection. The influence on the normal strain (and thus stress) distribution at the critical cross-section is investigated. Laboratory tests of bolted connections of single-angle or double-angle members to gusset plates were taken as the basis for the numerical analysis. FE models were created using 1D and 2D elements. Angles and gusset plates were modelled with shell elements. Two methods of modelling the friction grip bolting were considered: a bolt-regarding approach with 1D element systems modelling the bolts, and two variants of a bolt-disregarding approach with special constraints over some part of the member and gusset plate surfaces in contact: a) constraints over the whole area of contact, b) constraints over the area around each bolt shank ("partially tied"). Modelling friction grip bolted connections using simplified bolt modelling may be effective, especially when the analysis concerns the elastic range only. In such a case, disregarding the bolts and replacing them with "partially tied" modelling seems to be more attractive. It is less time-consuming and provides results of similar accuracy in comparison to an analysis utilizing simplified bolt modelling.

This thesis focuses on the cryptanalysis and the design of block ciphers and hash functions. The thesis starts with an overview of methods for the cryptanalysis of block ciphers which are based on differential cryptanalysis. We explain these concepts and also several combinations of these attacks. We propose new attacks on reduced versions of ARIA and AES. Furthermore, we analyze the strength of the internal block ciphers of hash functions. We propose the first attacks that break the internal block ciphers of Tiger, HAS-160, and a reduced round version of SHACAL-2. The last part of the thesis is concerned with the analysis and the design of cryptographic hash functions. We adapt a block cipher attack called the slide attack to the scenario of hash function cryptanalysis. We then use this new method to attack different variants of GRINDAHL and RADIOGATUN. Finally, we propose a new hash function called TWISTER, which was designed and proposed for the SHA-3 competition. TWISTER was accepted for round one of this competition. Our approach follows a new strategy for designing a cryptographic hash function. We also describe several attacks on TWISTER and discuss the security issues concerning these attacks on TWISTER.

In the context of finite element model updating using vibration test data, natural frequencies and mode shapes are used as validation criteria. Consequently, the order of natural frequencies and mode shapes is important. As only limited spatial information is available and noise is present in the measurements, the automatic selection of the most likely numerical mode shape corresponding to a measured mode shape is a difficult task. The most common criterion to indicate corresponding mode shapes is the modal assurance criterion. Unfortunately, this criterion fails in certain cases. In this paper, the pure mathematical modal assurance criterion will be enhanced by additional physical information of the numerical model in terms of modal strain energies. A numerical example and a benchmark study with real measured data are presented to show the advantages of the enhanced energy based criterion in comparison to the traditional modal assurance criterion.
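The modal assurance criterion itself is a normalized inner product of two mode shape vectors; a minimal sketch (the vectors here are purely illustrative, not taken from the benchmark study):

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal assurance criterion between two mode shape vectors (value in [0, 1])."""
    num = abs(np.vdot(phi_a, phi_b)) ** 2
    den = np.vdot(phi_a, phi_a).real * np.vdot(phi_b, phi_b).real
    return num / den

# Identical shapes give MAC = 1 (scaling is irrelevant); orthogonal shapes give 0.
phi = np.array([1.0, 2.0, -1.0])
print(mac(phi, 3.0 * phi))                              # 1.0
print(mac(np.array([1.0, 0.0]), np.array([0.0, 1.0])))  # 0.0
```

The failure cases mentioned above typically occur for spatially undersampled or closely spaced modes, where several numerical shapes yield similarly high MAC values; this is where the energy-based enhancement helps.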

The advent of high-performance mobile phones has opened up the opportunity to develop new context-aware applications for everyday life. In particular, applications for context-aware information retrieval in conjunction with image-based object recognition have become a focal area of recent research. In this thesis we introduce an adaptive mobile museum guidance system that allows visitors in a museum to identify exhibits by taking a picture with their mobile phone. Besides approaches to object recognition, we present different adaptation techniques that improve classification performance. After providing a comprehensive background of context-aware mobile information systems in general, we present an on-device object recognition algorithm and show how its classification performance can be improved by capturing multiple images of a single exhibit. To accomplish this, we combine the classification results of the individual pictures and consider the perspective relations among the retrieved database images. In order to identify multiple exhibits in pictures we present an approach that uses the spatial relationships among the objects in images. They make it possible to infer and validate the locations of undetected objects relative to the detected ones and additionally improve classification performance. To cope with environmental influences, we introduce an adaptation technique that establishes ad-hoc wireless networks among the visitors’ mobile devices to exchange classification data. This ensures constant classification rates under varying illumination levels and changing object placement. Finally, in addition to localization using RF-technology, we present an adaptation technique that uses user-generated spatio-temporal pathway data for person movement prediction. Based on the history of previously visited exhibits, the algorithm determines possible future locations and incorporates these predictions into the object classification process. 
This increases classification performance and offers benefits comparable to traditional localization approaches but without the need for additional hardware. Through multiple field studies and laboratory experiments we demonstrate the benefits of each approach and show how they influence the overall classification rate.

Within the scheduling of construction projects, different, partly conflicting objectives have to be considered. The specification of an efficient construction schedule is a challenging task, which leads to an NP-hard multi-criteria optimization problem. In the past decades, so-called metaheuristics have been developed for scheduling problems to find near-optimal solutions in reasonable time. This paper presents a Simulated Annealing concept to determine near-optimal construction schedules. Simulated Annealing is a well-known metaheuristic optimization approach for solving complex combinatorial problems. To enable dealing with several optimization objectives, the Pareto optimization concept is applied. Thus, the optimization result is a set of Pareto-optimal schedules, which can be analyzed for selecting exactly one practicable and reasonable schedule. A flexible constraint-based simulation approach is used to generate possible neighboring solutions very quickly during the optimization process. The essential aspects of the developed Pareto Simulated Annealing concept are presented in detail.
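The core ingredients of such a Pareto Simulated Annealing scheme, a dominance test, a probabilistic acceptance rule and an archive of non-dominated schedules, can be sketched as follows (the two-objective vectors, e.g. (duration, cost), and this particular acceptance rule are illustrative assumptions, not the authors' exact formulation):

```python
import math
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def accept(current, candidate, temperature, rng=random.random):
    """SA acceptance: non-dominated candidates are always accepted; dominated
    ones with a Boltzmann probability based on the summed objective increase."""
    if not dominates(current, candidate):
        return True
    delta = sum(c - x for c, x in zip(candidate, current))
    return rng() < math.exp(-delta / temperature)

archive = []  # non-dominated schedules found so far

def update_archive(solution):
    """Insert a solution, dropping anything it dominates."""
    global archive
    if any(dominates(a, solution) for a in archive):
        return
    archive = [a for a in archive if not dominates(solution, a)] + [solution]

update_archive((10, 5)); update_archive((8, 7)); update_archive((9, 9))
print(archive)   # [(10, 5), (8, 7)]; (9, 9) was dominated
```

In the full method, candidates would come from the constraint-based simulation as neighboring schedules, and the archive at the end of the cooling schedule is the returned Pareto set.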

Nonlinear analyses are characterised by approximations of the fundamental equations of differing quality. Starting from a general description of the nonlinear finite element formulation, the fundamental equations are derived for plane truss elements. Special emphasis is placed on the determination of the internal and external system energy, as well as on the influence of displacement-strain relationships of different quality on the solution. To simplify the solution procedure, the nonlinear function describing the kinematics is expanded into a Taylor series and truncated after the n-th series term. The different kinematic models influence the speed of convergence as well as the exactness of the solution; this influence is shown on a simple truss structure. To assess the quality of different formulations of the nonlinear kinematic equation, three approaches are discussed. First, the overall internal and external energy is compared for different kinematic models. In a second step, the energy content related to the single terms of the displacement-strain relationship is investigated and used for quality control along two different paths. Based on single ε-terms, an adaptive scheme is used to change the kinematic model depending on the increasing nonlinearity of the structure. The solution quality has turned out to be satisfactory compared to the exact result. More detailed investigations are necessary to find criteria for the threshold values of the iterative process as well as for the decision on the number and size of the incremental load steps.
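The effect of truncating the displacement-strain relationship can be illustrated for a single plane truss element using the Green-Lagrange strain measure (a common choice for this element type; the paper's specific kinematic models are not reproduced here):

```python
import numpy as np

def strain(du, dw, order):
    """Axial strain of a plane truss element from the displacement gradients
    du, dw along the bar axis. order=1: linear kinematics (epsilon = u');
    order=2: full Green-Lagrange measure with the quadratic terms."""
    eps = du
    if order >= 2:
        eps += 0.5 * (du**2 + dw**2)
    return eps

# Rigid-body rotation of a unit bar by angle t: du = cos(t)-1, dw = sin(t).
# The exact axial strain is zero; the linear measure reports spurious strain.
t = 0.1
du, dw = np.cos(t) - 1.0, np.sin(t)
print(strain(du, dw, 1))   # approx. -0.005 (spurious strain, linear kinematics)
print(strain(du, dw, 2))   # approx. 0 (quadratic terms cancel the error)
```

This is exactly the kind of single-term energy contribution the adaptive scheme above monitors: when the quadratic terms carry negligible energy, the cheaper truncated kinematics suffice.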

The application of a recent method based on formal power series is proposed. It relies on a new representation of solutions of Sturm-Liouville equations. The method is used to calculate the transmittance and reflectance coefficients of finite inhomogeneous layers with high accuracy and efficiency. By tailoring the refractive index profile that defines the inhomogeneous medium, it is possible to develop important applications such as optical filters. A number of profiles were evaluated, and some of them were then selected in order to improve their characteristics via modification of their profiles.
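The power-series method itself is not reproduced here, but the quantities it delivers can be illustrated with the classical characteristic-matrix (transfer-matrix) technique, approximating an inhomogeneous layer by thin homogeneous slices (this slicing, and normal incidence, are assumptions of the sketch):

```python
import numpy as np

def layer_RT(slices, k0, n_in=1.0, n_out=1.0):
    """Reflectance R and transmittance T of a layer stack at normal incidence
    via the characteristic-matrix method; slices = [(refractive_index, thickness), ...]."""
    M = np.eye(2, dtype=complex)
    for n, d in slices:
        delta = k0 * n * d                      # phase thickness of the slice
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B = M[0, 0] + M[0, 1] * n_out               # effective field amplitudes
    C = M[1, 0] + M[1, 1] * n_out
    r = (n_in * B - C) / (n_in * B + C)
    R = abs(r) ** 2
    T = 4.0 * n_in * n_out / abs(n_in * B + C) ** 2
    return R, T

# A quarter-wave slab with n = 2 in air: R = ((1 - n^2)/(1 + n^2))^2 = 0.36.
R, T = layer_RT([(2.0, 0.5)], k0=np.pi / 2)
print(round(R, 2), round(T, 2))   # 0.36 0.64
```

A graded (inhomogeneous) profile would simply be passed as many thin slices with slowly varying index; the paper's power-series representation avoids this discretization error.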

MICROPLANE MODEL WITH INITIAL AND DAMAGE-INDUCED ANISOTROPY APPLIED TO TEXTILE-REINFORCED CONCRETE
(2010)

The presented material model reproduces the anisotropic characteristics of textile reinforced concrete in a smeared manner. This includes both the initial anisotropy introduced by the textile reinforcement, as well as the anisotropic damage evolution reflecting fine patterns of crack bridges. The model is based on the microplane approach. The direction-dependent representation of the material structure into oriented microplanes provides a flexible way to introduce the initial anisotropy. The microplanes oriented in a yarn direction are associated with modified damage laws that reflect the tension-stiffening effect due to the multiple cracking of the matrix along the yarn.

We give a sufficient and a necessary condition for an analytic function "f" on the unit disk "D" with Hadamard gap to belong to a class of weighted logarithmic Bloch spaces, as well as to the corresponding little weighted logarithmic Bloch space, under some conditions posed on the defined weight function. We also study the relations between the class of weighted logarithmic Bloch functions and some other classes of analytic functions with the help of analytic functions in the Hadamard gap class.

On 25 March 2010, the Chair of Construction Engineering and Management (Professur Baubetrieb und Bauverfahren), as part of its annual series of construction management conferences, held a one-day workshop entitled "Modellierung von Prozessen zur Fertigung von Unikaten" (Modelling of Processes for the Production of One-of-a-kind Products), together with the working group "Unikatprozesse" of the section "Simulation in Produktion und Logistik" (SPL) within the simulation association ASIM (Arbeitsgemeinschaft Simulation). Many construction processes are characterised by their one-of-a-kind nature. Such unique products are marked by prototypical singularity, individuality, manifold boundary conditions, and a low degree of standardisation and repetition. This makes realistic modelling for the simulation of so-called one-of-a-kind processes difficult. The majority of the conference contributions reproduced in this volume are devoted to this peculiarity.

Building information modeling offers a huge potential for increasing the productivity and quality of construction planning processes. Despite its promising concept, the approach has not found widespread use. One of the reasons is the insufficient coupling of the structural models with the general building model. Instead, structural engineers usually set up a structural model that is independent of the building model and consists of mechanical models of reduced dimension. An automatic model generation, which would be valuable in case of model revisions, is therefore not possible. This can be overcome by a volumetric formulation of the problem. A recent approach applied the p-version of the finite element method to this problem. This method, in conjunction with a volumetric formulation, is suited to simulate the structural behaviour of both "thick" solid bodies and thin-walled structures. However, a notable discretization error remains in the numerical models. This paper therefore proposes a new approach for overcoming this situation. It suggests combining isogeometric analysis with volumetric models in order to integrate the structural design into the digital, building-model-centered planning process and to reduce the discretization error. The concept of isogeometric analysis consists, roughly, in the application of NURBS functions to represent both the geometry and the shape functions of the elements. These functions possess some beneficial properties regarding numerical simulation. Their use, however, leads to some intricacies related to the setup of the stiffness matrix. This paper describes some of these properties.
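The NURBS shape functions mentioned above are built from B-spline basis functions via the Cox-de Boor recursion; a minimal sketch (knot vector and degree chosen purely for illustration):

```python
def bspline_basis(i, p, u, U):
    """Cox-de Boor recursion: value of the i-th B-spline basis function of
    degree p at parameter u for knot vector U (half-open interval convention)."""
    if p == 0:
        return 1.0 if U[i] <= u < U[i + 1] else 0.0
    left = right = 0.0
    if U[i + p] != U[i]:
        left = (u - U[i]) / (U[i + p] - U[i]) * bspline_basis(i, p - 1, u, U)
    if U[i + p + 1] != U[i + 1]:
        right = ((U[i + p + 1] - u) / (U[i + p + 1] - U[i + 1])
                 * bspline_basis(i + 1, p - 1, u, U))
    return left + right

# Quadratic basis on an open knot vector: the functions form a partition of unity.
U = [0, 0, 0, 0.5, 1, 1, 1]
vals = [bspline_basis(i, 2, 0.3, U) for i in range(4)]
print(round(sum(vals), 12))   # 1.0 (partition of unity)
```

The partition-of-unity and non-negativity properties shown here are among the "beneficial properties" exploited in the isogeometric stiffness matrix assembly; the higher inter-element continuity of these functions is what distinguishes them from standard Lagrange shape functions.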

In this paper we present an inverse method which is capable of identifying system components in a hydro-mechanically coupled system, i.e. for fluid flow in porous media. As an example we regard water dams that were constructed more than a hundred years ago but are still in use. Over time, ageing processes have changed the condition of these dams, and fissures may have grown within them. The proposed method is designed to locate these fissures from combined mechanical and hydraulic measurements. In a numerical example, the fissures or damaged zones are described by a smeared crack model. The task is to identify simultaneously the spatial distribution of Young's modulus and the hydraulic permeability, because in regions where damage is present, the mechanical stiffness of the system is reduced and the permeability increased. The inversion is shown to be an ill-posed problem. As a consequence, regularizing methods have to be applied, where the nonlinear Landweber method (a gradient-type method combined with a discrepancy principle) has proven to be an efficient choice.
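In the linear case, the Landweber method reduces to a gradient iteration stopped by the discrepancy principle; a toy sketch (the matrix and noise model are illustrative, not the dam identification problem):

```python
import numpy as np

def landweber(A, y_meas, noise_level, tau=1.5, max_iter=50000):
    """Landweber iteration x_{k+1} = x_k + omega * A^T (y - A x_k), stopped
    early by the discrepancy principle ||A x - y|| <= tau * noise_level."""
    omega = 1.0 / np.linalg.norm(A, 2) ** 2     # step size ensuring convergence
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        residual = y_meas - A @ x
        if np.linalg.norm(residual) <= tau * noise_level:
            break                               # early stopping acts as regularization
        x = x + omega * (A.T @ residual)
    return x

# Mildly ill-conditioned toy identification problem with noisy "measurements".
rng = np.random.default_rng(0)
A = np.array([[1.0, 0.9], [0.9, 1.0]])
x_true = np.array([1.0, -1.0])
noise = 1e-3 * rng.standard_normal(2)
x_rec = landweber(A, A @ x_true + noise, noise_level=np.linalg.norm(noise))
print(np.round(x_rec, 2))   # approx. [ 1. -1.]
```

In the paper's nonlinear setting, `A` would be replaced by the linearized forward operator of the coupled hydro-mechanical model at the current iterate, but the stopping logic is the same.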

Planning and construction processes are characterized by the peculiarity that they need to be designed individually for each project; an individual schedule has to be set up for each project. Schedules from already finished projects are used as the basis for a new project, but adaptations are always necessary. In practice, scheduling tools only document a process. Schedules cover a set of activities, their durations and a set of interdependencies between activities; the design of the process itself is up to the user. It is not necessary to specify each interdependency, and completeness and correctness need to be checked manually. No methodologies are available to guarantee properties such as correctness or completeness. The considerations presented in this paper are based on an approach in which a planning and a construction process, including the interdependencies between planning and construction activities, are regarded as a computed result: selected information needs to be specified by the user, and a proposal for an order of planning and construction activities is computed. As a consequence, process properties such as correctness and completeness can be guaranteed with respect to the user input. Especially in Germany, clients are allowed to modify their requirements at any time, which leads to modifications in the planning and construction processes. This paper presents a mathematical formulation of this problem based on set theory. A complex structure covering objects and relations is set up, and operations are defined that guarantee consistency in the underlying, versioned process description. The presented considerations build on previous work and can be regarded as the next step in a series describing how a suitable concept for handling planning and construction processes in civil engineering can be formed.

The uncertainty existing in the construction industry is bigger than in other industries; consequently, most construction projects do not go entirely as planned. The project management plan therefore needs to be adapted repeatedly within the project lifecycle to suit the actual project conditions. Generally, the risks of change in the project management plan are difficult to identify in advance, especially if these risks are caused by unexpected events such as human errors or changes in the client's preferences. The knowledge acquired from different resources is essential to identify probable deviations as well as to find proper solutions to the change risks faced. Hence, it is necessary to have a knowledge base that contains known solutions for the common exceptional cases that may cause changes in each construction domain. The ongoing research work presented in this paper uses the process modeling technique of Event-driven Process Chains to describe different patterns of structural change in schedule networks. This results in several so-called "change templates". Under each template, different types of change risk/response pairs can be categorized and stored in a knowledge base. This knowledge base is described as an ontology model populated with reference construction process data. The implementation of the developed approach can be seen as an iterative scheduling cycle that is repeated within the project lifecycle as new change risks surface. This helps to check the availability of ready solutions in the knowledge base for the situation at hand. Moreover, if a solution is adopted, CPSP ("Change Project Schedule Plan"), a prototype developed for the purpose of this research work, is used to make the needed structural changes to the schedule network automatically, based on the change template. What-if scenarios can be implemented with the CPSP prototype in the planning phase to study the effect of specific situations without endangering the project objectives. Hence, better designed and more maintainable project schedules can be achieved.

Buildings can be divided into various types and described by a huge number of parameters. Within the life cycle of a building, especially during the design and construction phases, many engineers with different points of view, proprietary applications and data formats are involved, and their collaboration is characterised by a high amount of communication. Due to these aspects, a homogeneous building model for all engineers is not feasible. The status quo in civil engineering is the segmentation of the complete model into partial models. Currently, the interdependencies of these partial models are not in the focus of available engineering solutions. This paper addresses the problem of coupling partial models in civil engineering. According to the state of the art, applications and partial models are formulated with the object-oriented method. Although this method solves basic communication problems like subclass coupling directly, many relevant coupling problems remain unsolved. It is therefore necessary to analyse and classify the relevant coupling types in building modelling. Coupling in computer science refers to the relationship between modules and their mutual interaction and can be divided into different coupling types, which differ in the degree to which the coupled modules rely upon each other. This is exemplified by a general reference example from civil engineering. A uniform formulation of coupling patterns is described analogously to design patterns, a common methodology in software engineering. Design patterns are templates describing a general reusable solution to a commonly occurring problem; a template is independent of the programming language and the operating system. The coupling patterns are selected according to the specific problems of building modelling, and a specific meta-model for coupling problems in civil engineering is introduced. In our meta-model, the coupling patterns are a semantic description of a specific coupling design.

By the use of numerical methods and the rapid development of computer technology in recent years, a large variety, complexity, refinement and capability of partial models have been achieved. This can be noticed in the evaluation of the reliability of structures, e.g. the increased use of spatial structural systems. For the different fields of civil engineering, well-developed partial models already exist. Because these partial models are most often used separately, the general view is not entirely illustrated. Until now, there has been no common methodology for evaluating the efficiency of models; the trust in the prediction of a particular engineering model has generally relied on the engineer's experience. In this paper, the basics of the evaluation of simple models and coupled partial models of frame structures are discussed using suitable numerical methods. Furthermore, quality classes (levels) of design tasks are defined based on their practical relevance, and analysis methods are systemized. After analysing different published assessment methods, it may be noted that the Efficiency Indicator Method (EWM) is most suitable for the evaluation problem observed. Therefore, the EWM was modified into the Model Efficiency Analysis (MEA) for the purpose of a holistic evaluation. The criteria are grouped into benefit and expenditure, and calculating the quotient (benefit/expenditure) makes it possible to make a statement about the efficiency of the observed models. Presently, the expenditure value is not a subject of investigation, so the model efficiency is calculated from the benefit value only. The paper also contains the associated criteria catalogue, different normalization methods, as well as weighting possibilities.

Complex buildings are increasingly planned with tools that allow the export of building information in the STEP format on the basis of the IFC (Industry Foundation Classes). The availability of this interface makes it possible to use building information for further processing. For the visualization of the geometric data, the IFC provide several geometric models for the representation of building elements. Among other things, geometric Boolean operations are needed to "cut" openings out of building elements (e.g. for windows and doors).
The subject of this contribution is the presentation of an algorithm for computing Boolean operations on the basis of a triangulated B-rep (Boundary Representation) model following HUBBARD (1990). Since building elements within IFC building models are often the result of several Boolean operations (e.g. to subtract several window openings from a given wall), Hubbard's algorithm was adapted so that several Boolean operations can be computed simultaneously. This optimization achieves a significant reduction of the required computations and thus of the computing time.

One of the main focuses of recent Chinese urban development is the creation and retrofitting of public spaces driven by market forces and demand. However, research concerning human and cultural influences on the shaping of public spaces has been scant, and many planning aspects remain institutionally and legislatively undefined and ambiguous. This is an explanatory piece of research addressing the interactions, incorporations and interrelationships between the lived environment and its peoples; it is knowledge-seeking and normative. Theoretically, public space in a Chinese context is conceptualized; empirically, a selected case is inquired into. The research has unfolded a comparatively complete understanding of China's planning evolution and ongoing practices. Data collection emphasizes the concepts of 'people' and 'space'. First-hand data is derived from intensive fieldwork and observatory and participatory documentation. The ample, detailed and authentic empirical data empowers space syntax as a strong analysis tool for decoding how human activities influence public space. The findings fall into two interdependent categories. Firstly, the research discloses the studied settlement as a generic, organic and incremental development model. Its growth and built environment are evolutionary and incremental, based on its intrinsic traditions, life values and available resources. As a self-sustaining settlement, it highlights certain vernacular traits of spatial development arising from lifestyles and cultural practices; its spatial articulation appears as a process parallel to socio-economic transitions. Secondly, crucial planning aspects are theoretically summarized to address the existing gap between current planning methodology and practicalities.
It pinpoints several particularly significant issues, namely the disintegration of the land use system and urban planning, the absence of urban design in the planning system, the loss of a human-responsive environment resulting from standardized planning, and the under-estimation of heritage in urban development. The research challenges present Chinese planning laws and regulations through the study of urban public space and aims to yield certain growth leverage for planning and development, so that planning can empower inhabitants to make decisions in the process of shaping and sustaining their space. It therefore discusses not only legislative issues concerning land use planning, urban design and heritage conservation, but also leads to a pivotal proposal: the integration of people and their social spaces in formulating a new spatial strategy. It aims to inform policymakers of the underpinning social values and cultural practices in reconfiguring postmodern Chinese spatiality, and it propounds that the social context endemic to communities shall be integrated as a crucial tool in spatial strategy design, hence strengthening spatial attributes and improving quality of life.

Complex buildings and other structures are increasingly planned with software that supports the export of building information in the STEP format on the basis of the IFC (Industry Foundation Classes). The availability of this interface makes it possible to use the data of a building for further processing.
Within the IFC, several geometrical models for the visualization of building elements are provided. Among others, geometric Boolean set operations in the sense of CSG (Constructive Solid Geometry) are needed to "subtract" openings from building elements (e.g. for windows or doors).
Therefore, software components based on the algorithms of [Laidlaw86] and [Hubbard90] were developed at the professorship Informatik im Bauwesen that support these functionalities on the basis of Java3D. In practice, however, it turned out that these components are numerically unstable and lack acceptable robustness and error tolerance. This is caused by mistakes in the implementation (bugs) as well as by the insufficient handling of numerical inaccuracies. Furthermore, a verification and, where applicable, a correction of input data of substandard quality is missing.
Prior to this student research project, the implementation of a self-contained application for visual error control was initiated. This tool visualizes several program steps and their corresponding data; with its help, the implemented algorithms can be analyzed in detail.
The papers [Laidlaw86] and [Hubbard90] give an unsatisfactory description of some essential steps of the algorithm, as well as of implementation details for executing Boolean set operations on the basis of a B-rep (Boundary Representation) model. Hence, the algorithm is to be documented comprehensibly with the help of figures and pseudo code. Moreover, problems within the existing implementation are to be identified and possible solution strategies provided.

We present recent developments of adaptive wavelet solvers for elliptic eigenvalue problems. We describe the underlying abstract iteration scheme of the preconditioned perturbed iteration. We apply the iteration to a simple model problem in order to identify the main ideas which a numerical realization of the abstract scheme is based upon. This indicates how these concepts carry over to wavelet discretizations. Finally we present numerical results for the Poisson eigenvalue problem on an L-shaped domain.
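A dense stand-in for the underlying idea, an (inverse) iteration driven by a good solver or preconditioner, applied to the 1D Poisson eigenvalue problem (the L-shaped 2D domain and the adaptive wavelet machinery of the paper are not reproduced here):

```python
import numpy as np

def smallest_eigenpair(A, tol=1e-10, max_iter=500):
    """Inverse iteration for the smallest eigenpair of an SPD matrix; the exact
    solve below plays the role a wavelet preconditioner plays in the paper."""
    n = A.shape[0]
    v = np.ones(n) / np.sqrt(n)
    lam = v @ A @ v
    for _ in range(max_iter):
        w = np.linalg.solve(A, v)       # apply (approximate) inverse of A
        v = w / np.linalg.norm(w)
        lam_new = v @ A @ v             # Rayleigh quotient estimate
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam, v

# 1D Poisson eigenvalue problem -u'' = lam * u on (0, 1), finite differences:
n, h = 200, 1.0 / 201
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
lam, _ = smallest_eigenpair(A)
print(abs(lam - np.pi**2) < 0.1)   # True: smallest eigenvalue approx. pi^2
```

In the wavelet setting, the exact solve is replaced by an inexact, adaptively refined application of the preconditioned operator, which is what makes the scheme a "perturbed" iteration.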

In this paper, the influence of changes in the mean wind velocity, the power-law coefficient of the wind profile, the drag coefficient of the terrain and the structural stiffness is investigated on structural models of different complexity. The paper gives a short introduction to wind profile models and to the approach by A. G. Davenport for computing the structural reaction to wind-induced vibrations. First, this approach is illustrated with a simple example (a skyscraper), which allows the reader to study the differences in variance when one of the above-mentioned parameters is changed and to see the influence of structural models of different complexity on the result. Furthermore, an approach for estimating the required discretization level is given. With this knowledge, the design methodology for structural models can be based on a deeper understanding of the different behavior of the individual models.

There are many different approaches to simulating the mechanical behavior of RC frames with masonry infills. In this paper, selected modeling techniques for masonry infills and reinforced concrete frame members are discussed, with attention focused on the damaging effects on the individual members and the entire system under quasi-static horizontal loading. The effect of the infill walls on the surrounding frame members is studied using equivalent strut elements. The implemented model considers in-plane failure modes for the infills, such as bed joint sliding and corner crushing. The frame member models differ with respect to their stress state. Finally, examples are provided and compared with experimental data from a real-size test executed on a three-story RC frame with and without infills. The quality of the model is evaluated on the basis of load-displacement relationships as well as damage progression.

ESTIMATING UNCERTAINTIES FROM INACCURATE MEASUREMENT DATA USING MAXIMUM ENTROPY DISTRIBUTIONS
(2010)

Modern engineering design often considers uncertainties in geometrical and material parameters and in the loading conditions. Based on initial assumptions on the stochastic properties, such as the mean values, standard deviations and distribution functions of these uncertain parameters, a probabilistic analysis is carried out. In many application fields, probabilities of the exceedance of failure criteria are computed. The resulting failure probability strongly depends on the initial assumptions on the random variable properties. Measurements are always more or less inaccurate due to varying environmental conditions during the measurement procedure. Furthermore, the estimation of stochastic properties from a limited number of realisations also causes uncertainties in these quantities. Thus, assuming exactly known stochastic properties and neglecting these uncertainties may not lead to very useful probabilistic measures in a design process. In this paper we treat the stochastic properties of a random variable as uncertain quantities caused by so-called epistemic uncertainties. Instead of predefined distribution types we use the maximum entropy distribution, which enables the description of a wide range of distribution functions based on the first four stochastic moments. These moments are in turn taken as random variables to model the epistemic scatter in the stochastic assumptions. The main point of this paper is the discussion of the estimation of these uncertain stochastic properties from inaccurate measurements. We investigate the applicability of the bootstrap algorithm to quantify the uncertainties in the stochastic properties in the presence of imprecise measurement data. Based on the obtained estimates, we apply a standard stochastic analysis to a simple example to demonstrate the difference and the necessity of the proposed approach.
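The bootstrap step, resampling the measured data to obtain distributions of the first four moments that would feed the maximum entropy distribution, can be sketched as follows (sample size, seeds and the normal test data are assumptions of the sketch):

```python
import numpy as np

def bootstrap_moments(sample, n_boot=2000, rng=None):
    """Bootstrap distributions of the first four stochastic moments
    (mean, std, skewness, kurtosis) estimated from one measured sample."""
    rng = rng or np.random.default_rng(0)
    n = len(sample)
    stats = []
    for _ in range(n_boot):
        x = rng.choice(sample, size=n, replace=True)   # resample with replacement
        m, s = x.mean(), x.std(ddof=1)
        z = (x - m) / s
        stats.append((m, s, (z**3).mean(), (z**4).mean()))
    return np.array(stats)   # each column: one uncertain stochastic property

rng = np.random.default_rng(1)
sample = rng.normal(10.0, 2.0, size=50)        # 50 "inaccurate measurements"
moments = bootstrap_moments(sample)
mean_lo, mean_hi = np.percentile(moments[:, 0], [2.5, 97.5])
print(np.round([mean_lo, mean_hi], 2))         # 95% band for the uncertain mean
```

Each bootstrap row would then parameterize one maximum entropy distribution, so the epistemic scatter of the moments propagates into a family of possible input distributions for the failure probability analysis.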

The evident advances in the computational power of digital computers enable the modeling of structures as total systems. Such modeling demands compatible representations of the couplings between different structural subsystems. Therefore, models of the dynamic interaction between vehicle and bridge, and models of a bridge bearing, a coupling element between the bridge's superstructure and substructure, are of interest and are discussed within this paper. The vehicle-bridge interaction may be described as a function connecting two sets of behavior; in this case, the coupling is embodied by mutual parameters that affect both systems, such as the frequency content of the bridge and the vehicle. The bridge bearings, by contrast, are elements used specifically to couple; in such elements, the deformation and the transferred loads are used to characterize the coupling. The nature of these couplings and their influence on the bridge response is different. However, the need to assess the amount of dynamic response transferred by or within these couplings is a common argument.

The changed global security situation of the last eight years has shown the importance of emergency management plans for public buildings. Consequently, the use of computer simulators for assessing fire safety design and evacuation processes is increasing. The aim of these simulators is to provide more realistic evacuation simulations. The challenge is, firstly, to build the virtual simulation environment based on geometrical and material boundary conditions; secondly, to consider the mutual interaction effects between different parameters; and, finally, to produce a realistic visualization of the simulated results. In order to carry out this task, a new software method on a BIM platform has to be developed which can integrate all required simulations and provide an immersive output: BIM-ISEE (Immersive Safety Engineering Environment). BIM-ISEE will integrate the Fire Dynamics Simulator (FDS) for fire and evacuation simulation into Autodesk Revit, a BIM platform, and will represent the simulation results in the immersive virtual environment at the institute (CES-Lab). With BIM-ISEE, the fire safety engineer will be able to obtain more realistic visualizations in the immersive environment, to modify his concept more effectively, to evaluate the simulation results more accurately and to visualize the various simulation results. It can also give rescue staff the opportunity to perform and evaluate emergency evacuation trainings.