DOI: 10.31038/NRFSJ.2018112

Abstract

Traditional randomized controlled clinical trial designs pose a number of particular challenges for nutrition research. For instance, small effect sizes and large variability of the response are very common, and there is often a limited amount of early development data to inform the design of confirmatory trials. Individual nutrients often interact with several physiological processes and frequently require interaction with other nutrients to act in the most beneficial way. This makes it difficult to delineate the beneficial physiological effects of nutritional ingredients or products. In the past decades, the use of adaptive clinical trial designs has emerged as a promising methodology for early identification of signals related to the clinical benefits of an intervention or for optimizing a trial's chances of success. We believe that adaptive designs can, as has been the case for drug development, significantly improve some aspects of nutrition research. In this article, the application of adaptive design methods in nutritional clinical trials is discussed in terms of benefits, challenges and recommendations. This article aims to be a practical and comprehensive review of the topic, to raise awareness and to stimulate an adaptive design mindset among investigators, the scientific community and trial statisticians in the field of nutrition.

Keywords

Clinical research in nutrition; Randomized clinical trial; Adaptive design; Flexible trials; Nutrition specificities; Interim analysis; Group sequential design; Sample Size Re-assessment; Seamless trials; Simulation guided clinical trials

1. Introduction

In clinical research and development, adequate and well-controlled clinical trials using valid study designs are essential for demonstrating a causal relationship between the experimental intervention and the outcome and for obtaining substantial evidence of safety and efficacy [1]. The clinical trial process is lengthy and costly, but necessary to ensure a fair and reliable assessment of the intervention under investigation.

In the early 2000s, it was recognized that increasing investment in biomedical research had not resulted in a proportionally increased success rate of pharmaceutical/clinical development. The United States (US) Food and Drug Administration (FDA) launched the Critical Path Initiative to identify possible causes and provide solutions. In 2006, the FDA released a Critical Path Opportunities List that outlined six broad topic areas to bridge the gap between the quick pace of new biomedical discoveries and the slower pace at which those discoveries are developed into therapies. Among these broad topic areas, the FDA called for advancing innovative trial designs and especially for the use of prior experience or accumulated information in trial design. Many researchers interpreted this as an encouragement by the FDA to use adaptive design methods or Bayesian approaches in pharmaceutical/clinical development. As a result, the FDA published a first draft guidance on adaptive design clinical trials in 2010. The use of adaptive trial designs in clinical research and development has become very popular since then. Adaptive designs provide the opportunity to modify certain aspects of the trial design while the study is still ongoing, without violating the quality and the integrity of the data. The use of adaptive designs in clinical trials is attractive because of their flexibility and efficiency in identifying early signals of clinical benefit of an intervention. They help increase the probability of success of the clinical investigation, better reflect real clinical practice, and offer ethical advantages by enabling early decisions with respect to both the efficacy and the safety of the intervention.

Nowadays, nutritional interventions are developed to demonstrate the maintenance/improvement of physical and mental well-being or the prevention of nutrition-related diseases. Research in the field of nutrition has grown substantially in the past decade. Technological breakthroughs and research discoveries have greatly increased the scope of targeted health benefits and have even attracted pharmaceutical companies, especially in the area of functional food. Today, nutrition research is facing challenges similar to pharmaceutical research with respect to the cost and complexity of clinical trials, but nutritional clinical trials also face very specific challenges and obstacles. Nutrition-based interventions can lead to significant public health advances, but require an approach that takes into account the specificities of nutritional research. We believe that adaptive designs can, as has been the case for drug development, significantly improve some aspects of nutrition research.

In this article, the application of adaptive design methods in nutritional clinical trials is discussed in terms of benefits, challenges and recommendations. This article aims to be a practical and comprehensive review of the topic, to raise awareness and to stimulate an adaptive design mindset among investigators, the scientific community and trial statisticians in the field of nutrition. Concrete guidelines and key literature references are given as a first basis for an adaptive approach, in order to maximize the chances of successful implementation and to prevent inappropriate use of the methodology by those who are not familiar with the topic. First, a general overview of adaptive clinical trial designs is presented. The following section then outlines the challenges in nutrition clinical trials and, for some aspects, discusses the opportunity of applying adaptive design methodology. Some key considerations for the implementation of flexible designs in the nutritional field, consolidated from the literature and the authors' experience, are detailed in the last section.

2. What is Adaptive Clinical Trial Design?

In conventional trial designs, the study progresses in a lock-step fashion. Once the objective(s) of the trial is (are) clear, key decisions need to be made to set up the trial design: choose relevant outcomes, set a hierarchy of parameters in line with the trial strategy, define the target population, select an appropriate dose regimen, decide on the hypotheses used for the sample size, plan upfront the statistical analysis with appropriate missing data and multiple testing strategies, etc. These decisions on critical aspects of the trial are made according to the information available during the trial preparation phase. After incorporating all the decisions in the trial protocol, the study moves into the execution phase. Once trial execution is completed, data are locked for the next phase: analysis and sharing of results.

Some would argue that this is how we always did, and should do, clinical trials. To be provocative, the authors argue that this is somewhat like driving a car with your eyes closed. You plan your trip and you decide to go from point A to point B (design thinking), you draw your itinerary precisely on a map (protocol writing) without forgetting to fuel your car (sample size). But then, once on the road (trial execution), you drive from A to B with your eyes half-closed. The game is set. A few driving adaptations are sometimes made (protocol amendments), but you mainly follow your map, not what is actually happening on the road. Major events can be missed, because you do not see them. Adaptive trial design suggests, if well anticipated during the planning of your trip, opening your eyes and adapting your driving.

The idea of adaptive design methods in clinical trials is to allow a certain flexibility for identifying any signal, pattern/trend, and preferably the best (optimal) clinical benefits of an intervention during the conduct of the clinical trial (after review of the accumulating data). Appropriate flexibility allows the study design to be modified as the clinical trial continues, so that the study objectives are achieved accurately and reliably, with a higher probability of success and in a more efficient way than with a traditional fixed design.

2.1  Definition of Adaptive Design

An adaptive design can be defined as a design that allows adaptations to trial and/or statistical procedures of the trial after its initiation without undermining the validity and integrity of the trial [2]. An adaptation refers to a modification or a change made to trial and/or statistical procedures before, during, or after the conduct of a clinical trial. Adaptations commonly employed in clinical trials can be classified into the categories of prospective (by design) adaptations, concurrent (ad hoc) adaptations, and retrospective adaptations. Alternatively, with the emphasis on by-design adaptations only (rather than ad hoc adaptations), the US Pharmaceutical Research and Manufacturers of America (PhRMA) Working Group on Adaptive Design refers to an adaptive design as a clinical trial design that uses accumulating data to decide how to modify aspects of the study as it continues, without undermining the validity and integrity of the trial [3]. An adaptive design is also considered a flexible design [4, 5]. In February 2010, the US FDA circulated a draft guidance on adaptive design clinical trials. The FDA draft guidance defines an adaptive design as a study that includes a prospectively planned opportunity for modification of one or more specified aspects of the study design and hypotheses based on analysis of (usually interim) data from subjects in the study [1]. The term prospective is emphasized to refer to adaptations planned before the data are examined in an unblinded manner. Analyses of the accumulating study data are performed at prospectively planned time points within the study, with or without formal statistical hypothesis testing. In practice, the design does not change, as the flexibility is part of the design.

2.2 Types of Adaptive Design

Depending upon the types of adaptations employed, adaptive designs can be classified into the following types: 1- adaptive randomization design, 2- group sequential design, 3- sample size re-estimation design, 4- drop-the-losers (or pick-the-winner) design, 5- adaptive dose finding design, 6- biomarker-adaptive design, 7- adaptive treatment-switching design, 8- hypothesis-adaptive design, 9- adaptive seamless trial design (e.g., a phase I/II design in early clinical development or a phase II/III design in late clinical development), 10- multiple adaptive design. Detailed information regarding this classification and these adaptive designs, with their commonly considered advantages and limitations, can be found in [6, 7]. To briefly explain a few: a group sequential design allows a trial to be stopped prematurely for safety, futility and/or efficacy. A sample size re-estimation design allows the sample size to be adjusted or re-estimated. A drop-the-losers design allows inferior intervention group(s) to be dropped. An adaptive seamless design combines objectives traditionally addressed in two or more separate trials into a single trial. A hypothesis-adaptive design allows modifications of or changes to the hypotheses. Group sequential designs, adaptive dose finding designs, and adaptive seamless designs (also known as two-stage adaptive seamless designs) are probably the most commonly employed adaptive trial designs in clinical research and development. Sample size re-assessment has also received great attention during the last decade. Most recently, biomarker-adaptive (or biomarker-driven) trial designs have become very popular for clinical research and development in precision medicine.

In the 2010 FDA draft guidance, adaptive designs are classified into two categories: well-understood designs and less well-understood designs. Well-understood designs refer mainly to study designs with planned modifications based on an interim analysis that either need no statistical correction or for which the statistical methods for data analysis (i.e. properly accounting for the analysis-related multiplicity issues) are well established. Such adaptive design methods include the classical group sequential design with the adaptations of stopping the trial early for safety, futility and/or efficacy, and approaches using overall/blinded outcome data, baseline data, or efficacy-unrelated outcome data. Moreover, they have been employed in clinical research for years and the regulatory agencies have built up sufficient experience to evaluate this class of adaptive designs. Less well-understood designs, on the other hand, are study designs whose statistical properties are not yet established and/or fully understood. Less well-understood adaptive design methods usually involve unblinded interim analyses to estimate the intervention effect(s); examples include adaptive randomization based on relative intervention group responses, sample size re-estimation based on effect size estimates at interim, and modification of the patient population based on treatment-effect estimates. Most importantly, regulatory agencies have limited experience in evaluating the validity and integrity of these adaptive design approaches.

Nowadays, the classification made in the 2010 guidance issued by the Center for Drug Evaluation and Research (CDER) and the Center for Biologics Evaluation and Research (CBER) of the FDA is commonly accepted, used and discussed in the literature. However, the draft guidance issued in 2015 by CBER and the Center for Devices and Radiological Health (CDRH) does not use this classification [8]. With a growing literature and FDA exposure with regard to less well-understood designs, some complex adaptations, such as unblinded sample size re-estimation, are becoming better understood and have already shown a track record of positive regulatory acceptance [9].

2.3  Benefits, limitations and requirements: General considerations

Possible benefits of the use of adaptive design methods in clinical trials include 1- the correction of wrong assumptions made at the beginning of the trial, such as those underlying the power calculation for the sample size, 2- the early selection of the most promising option with the limited number of subjects available at interim, 3- the use of emerging external information (e.g. recent publications regarding safety and tolerability of the intended dose regimen) and 4- the opportunity to react earlier to a surprise that is either positive (e.g., strong efficacy or clinical benefits) or negative (e.g., a safety concern or futility), to shorten the development time and consequently speed up the development process of the test intervention [10]. The use of adaptive design methods provides a second chance to modify or re-evaluate the trial after reviewing data from the trial itself at an interim look. It allows the integration of knowledge gained from within the trial and enables appropriate decision-making at the earliest time point. It increases the information value generated per resource unit invested. Introducing planned flexibility can make a trial not only more efficient but also more ethical [11–13].

Compared to a fixed design, an adaptive design requires additional effort during the planning, implementation, trial execution, analysis and dissemination of results. It requires careful planning with input from a cross-functional team. During implementation and execution, it requires specific operational considerations compared to a traditional design. Close collaboration and coordination are needed between the different functions during the course of the study, especially at the time of the interim analysis.

The underlying theoretical complexity of adaptive designs implies a solid statistical foundation and a careful interpretation of the results. Overall, major adaptations or modifications to a trial could 1- introduce operational bias/variation in data collection, 2- result in a shift in the target population in terms of either location or scale parameter, and/or 3- lead to inconsistency between the hypotheses to be tested and the corresponding statistical tests. Consequently, one may not be able to answer the medical/scientific questions that the original trial intended to address. In addition, complex adaptive designs require strong statistical expertise, an adequate infrastructure and/or external partnerships to guarantee the validity and integrity of the trial.

     The use of adaptive trial designs must not undermine the validity and integrity of the intended trial. Integrity refers to maintaining consistency and confidentiality of data during the conduct of the trial, independently of trial duration or number of adaptations [2, 3]. Validity refers to the minimization of biases that may be introduced after adaptations made to the trial, ensuring reproducibility, accuracy and precision of results coupled with inference that correctly accounts for all adaptations. Adaptations based on blinded analyses at the interim can largely reduce or completely avoid potential bias, and potential difficulties can be addressed prospectively if sufficient time is allocated during the planning phase of the trial.

2.4  Regulatory Perspectives

Since the release of the FDA draft guidance on adaptive design clinical trials [1], the use of adaptive design methods in clinical trials has been moving in the right direction. Yet there is still a long way to go until all of the scientific issues from clinical, statistical and regulatory perspectives are properly addressed. According to the draft guidance, sponsors are encouraged to gain experience by implementing adaptive design methods in early phase trials and/or exploratory studies. For confirmatory trials involving a less well-understood adaptive design, communication between sponsors and the regulatory agencies during the planning stage is recommended. Thus, it has been suggested that the escalating momentum for the use of adaptive design methods in clinical trials should proceed with caution [10]. Meanwhile, valid statistical methods for less well-understood adaptive designs with various adaptations should be developed to prevent possible misuse and/or abuse of adaptive design methods in clinical trials.

3. Nutrition Clinical Trials: opportunities and challenges for adaptive designs

Nutrition interventions mainly focus on maintaining health by reducing risk factors that predispose to the development of a disease. When designing a trial, maintaining health (i.e. keeping a physiological system in homeostasis at an optimal "healthy" level), as opposed to treating a disease (i.e. correcting a physiological process that is dysfunctional), demands from clinical nutrition scientists an approach that differs from a drug development way of thinking [14, 15]. For example, maintaining normal blood pressure by reducing a risk factor will require a different test hypothesis, with adapted endpoints and statistical analysis, compared to treating patients with diagnosed hypertension. Population selection and sub-group identification also require special attention. The difference between health and disease can be seen as a continuum [15]. It is often not easy to select a clear-cut and appropriate spectrum of the study population for which the desired health benefit of a nutrition intervention will be measurable and relevant from a public health perspective.

Active compounds of nutritional interventions are often complex to investigate. In their most complex forms, nutritional interventions can contain living organisms such as probiotic bacteria or supplements with mixtures of multiple ingredients, and/or can consist of substantial modifications of the diet in general [16]. Nutritional product interventions often contain ingredients that are already present in the daily food intake. Multiple components of the intervention may target different physiological pathways, and their combined effect does not necessarily equal the sum of the effects of the individual ingredients; significant positive or negative interactions may occur. Nutrients are usually beneficial within a specific dose range, and often the intake of other nutrients needs to be optimized before the benefit of the nutrient under investigation can be established [14, 15, 17].

When designing a clinical trial, the challenges of capturing multiple physiological effects of nutrients and of quantifying the level of interaction between multiple components of an intervention make it difficult to clearly identify, at an early stage, the expected beneficial effects and the related mechanism of action. They also complicate dose finding and make it difficult to demonstrate the isolated benefit of the intervention. In practice, attempts to capture additive, synergistic or overlapping physiological effects often lead to clinical trials with multiple heterogeneous endpoints and the associated design challenges of multiplicity and appropriate sample size calculation. Small effect sizes of interventions are the rule rather than the exception. Most of the accepted endpoints for clinical trials are validated for the development of products that are intended to treat diseases; their usefulness in the target healthy population for nutritional claims is often questionable and rarely validated.
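To give a rough sense of the sample size implications of small effect sizes, the sketch below uses the standard two-sample normal approximation for a parallel-group comparison; the effect sizes and operating characteristics are illustrative assumptions, not recommendations for any particular trial.

```python
# Sketch: approximate subjects per arm for a two-arm parallel trial, using the
# normal approximation n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 / d^2, where d is
# the standardized effect size and 1-beta the target power. Illustrative only.
from scipy.stats import norm

def per_arm_sample_size(d, alpha=0.05, power=0.80):
    """Approximate number of subjects per arm to detect standardized effect d."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance level
    z_power = norm.ppf(power)          # target power
    return 2 * (z_alpha + z_power) ** 2 / d ** 2

# A moderate 'pharma-like' effect vs. the small effects typical of nutrition:
for d in (0.5, 0.3, 0.2, 0.1):
    print(f"d = {d:4}: ~{per_arm_sample_size(d):6.0f} subjects per arm")
# Roughly 63, 174, 392 and 1570 subjects per arm, respectively.
```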

Substantiating health benefits for nutritional interventions poses a number of challenges. We believe that this research framework requires exploratory trials with optimized designs. In this section, we selected three aspects of nutrition clinical research that create significant opportunities, but also limitations, for the application of adaptive designs. The discussion applies to both academic and industry-sponsored nutrition clinical research. Further reading on nutrition research specifically can be found in [14–19].

3.1  Limited learning phase in nutrition clinical development

Although the application of Good Clinical Practice guidelines and a strict follow-up of study subjects are not negotiable, it is a fact that side effects and safety issues are less prevalent with nutritional than with pharmaceutical interventions. This allows for a more flexible and faster clinical development process. In turn, this leads to a more limited early learning phase, and there are far fewer early development data available to inform the design of confirmatory studies. In such a context, there is a higher level of uncertainty and risk associated with the choice of the primary endpoint, the expected effect size and the variability of the response, but also with regard to the right dose of the active ingredients or the appropriate target population. This commonly leads to study designs attempting to answer multiple heterogeneous objectives. It increases the complexity of the trials, requiring very careful consideration of the sample size and tailor-made statistical solutions to control the false positive error rate due to multiple hypothesis testing. Also, the often-used approach of combining endpoints that are exploratory by nature with confirmatory objectives in a single trial creates a particular challenge with respect to controlling the statistical risks. Statistical requirements for confirmatory objectives are more stringent than for exploratory ones, and with such "one-phase" clinical trials one needs to find a good balance that controls the statistical risks without penalizing the trial too much.

This lack of a learning phase creates a good opportunity for the "learn and confirm" approach enabled by adaptive design methodology. In this context, different "learn and confirm" strategies can be used in a seamless way, allowing several phases of development, usually investigated in separate trials, to be condensed into a single adaptive trial under one protocol. The learning and confirmatory phase(s) are separated by interim looks with decision-making, thereby reducing the clinical development time and increasing efficiency by combining information from subjects of all phases in the final analysis. This inferentially seamless approach (as distinct from an operationally seamless one) can be used in development 1- to select the most promising regimen of an intervention (according to dose or duration, for example) in a first, exploratory stage, and 2- to conclude on the efficacy of the intervention in its most optimal form in the confirmatory stage. An example of a phase II/III/IV adaptive design is described in [20]. An overview of seamless methods, together with recommendations, can be found in [21, 22]. See [12] for a discussion of the opportunities for adaptivity in the drug discovery and development process.
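As an illustration of how stage-wise evidence can be combined in such an inferentially seamless trial, the sketch below uses a weighted inverse normal combination of stage-wise p-values, one standard building block for two-stage adaptive designs; the stage sizes and p-values are hypothetical and the weights are a common convention, not the method of any specific reference cited here.

```python
# Sketch: weighted inverse normal combination of stage-wise one-sided p-values,
# a standard building block for two-stage (inferentially seamless) designs.
# Weights must be pre-specified (here proportional to the square root of the
# planned stage sizes) and must not depend on interim data. Hypothetical values.
import numpy as np
from scipy.stats import norm

def inverse_normal_combination(p1, p2, n1_planned, n2_planned):
    """Combine stage-wise one-sided p-values p1 and p2 into an overall p-value."""
    w1 = np.sqrt(n1_planned / (n1_planned + n2_planned))
    w2 = np.sqrt(n2_planned / (n1_planned + n2_planned))
    z_combined = w1 * norm.isf(p1) + w2 * norm.isf(p2)  # isf(p): z with P(Z > z) = p
    return norm.sf(z_combined)

# A promising (not yet significant) learning stage followed by a confirmatory
# stage can yield overall significance while using the data from both stages.
print(inverse_normal_combination(p1=0.10, p2=0.02, n1_planned=60, n2_planned=120))  # ~0.008
```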

Addressing multiple questions in the same trial may lead to complex adaptations, with the associated challenges of controlling the type I error and obtaining reliable and unbiased estimates. Adaptive trials are not a panacea for every uncertainty of the planning phase. In practice, it is important to make a realistic list of possible adaptations that takes into consideration the statistical, operational and regulatory constraints. Another aspect not to be underestimated is the fact that complex adaptations require non-negligible upfront preparation. Sometimes it will be more beneficial to proceed in a lock-step fashion and take the necessary time to explore the multitude of information that can be generated by a first exploratory trial, information that will later inform the design of a confirmatory trial.

3.2  Small effect size and large variability in the response to the intervention

In general, the beneficial effects of a nutrition intervention are small compared to what can be expected from a pharmaceutical compound. The expected beneficial effect is often close to the “noise” threshold of biological variability, which makes it more difficult to detect a significant difference. Most nutritional interventions target a generally healthy population, which substantially limits the margin for improvement when compared to a diseased population.

Nutrition-related traits vary significantly between individuals. Dietary habits, lifestyle and constitutional factors generally change over time and vary between cultures, populations and age groups. This results in large inter- and intra-individual variability with respect to nutrition requirements and the response to a nutritional intervention.

     These factors lead to the necessity of large trials, sufficiently powered to demonstrate a small health benefit against a large background variability. In addition, one would need to quantify or control the impact of numerous confounding factors related to environmental, behavioral and biological differences.

In some circumstances, a small effect size combined with uncertainty about the hypotheses used to design the trial can set the scene for the use of a Group Sequential Design (GSD) and/or a Sample Size Re-assessment (SSR) strategy. The group sequential approach can provide the option of decreasing the final sample size through realistic futility or efficacy boundaries. One could start with conservative hypotheses, designing the trial and the interim decision rules in such a way that the trial can be stopped early if the interim analysis reveals significantly positive results (efficacy boundary crossed) or negative results (futility boundary crossed, safety concern, or a benefit/risk ratio that is not clearly favorable). Another approach is to rely on a pure SSR strategy, where the initial sample size of the trial is calculated based on optimistic hypotheses and is then increased at interim if the initial target proves too optimistic but the results are promising enough to justify investing in a larger sample size. The choice between GSD and SSR needs to take into account multiple aspects, including the results of simulations. A relevant definition of positive, negative or promising results at interim and the selection of appropriate decision/success criteria are also critical aspects to decide on before trial execution.
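As a minimal sketch of how such a strategy can be explored quantitatively, the simulation below evaluates a two-stage group sequential design with an O'Brien-Fleming-type efficacy boundary and a simple non-binding futility rule; the boundary values, stage sizes and effect sizes are illustrative assumptions only, not a recommended design.

```python
# Sketch: Monte Carlo evaluation of a two-stage group sequential design with an
# O'Brien-Fleming-type efficacy boundary (z >= 2.797 at the interim, z >= 1.977
# at the final analysis, one-sided alpha ~ 2.5%) and a simple non-binding
# futility rule (stop if the interim z is below 0). Illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

def simulate_gsd(true_d, n_per_stage=100, z_eff=(2.797, 1.977), z_fut=0.0, n_sim=20_000):
    """Return proportions stopping early for efficacy, for futility, and winning overall."""
    stop_eff = stop_fut = win = 0
    for _ in range(n_sim):
        # Stage 1: z-statistic for a two-arm comparison, n_per_stage subjects per arm
        diff1 = rng.normal(true_d, 1, n_per_stage).mean() - rng.normal(0, 1, n_per_stage).mean()
        z1 = diff1 / np.sqrt(2 / n_per_stage)
        if z1 >= z_eff[0]:
            stop_eff += 1; win += 1; continue
        if z1 < z_fut:
            stop_fut += 1; continue
        # Stage 2: combine both stages (equal information per stage)
        diff2 = rng.normal(true_d, 1, n_per_stage).mean() - rng.normal(0, 1, n_per_stage).mean()
        z2 = diff2 / np.sqrt(2 / n_per_stage)
        if (z1 + z2) / np.sqrt(2) >= z_eff[1]:
            win += 1
    return stop_eff / n_sim, stop_fut / n_sim, win / n_sim

for d in (0.0, 0.25):  # no true effect vs. a small true benefit
    print(f"true d = {d}: (early efficacy, early futility, overall success) = {simulate_gsd(d)}")
```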

Handling heterogeneity by increasing the sample size is one option. It is also possible to limit the intervention to those subjects who are more likely to benefit from it. Enrichment strategies represent an attractive approach to address the challenge of heterogeneity [23], for example by using predictive markers as inclusion criteria or by identifying a more responsive subgroup during the trial. The latter approach is called predictive enrichment and, while controlling the type I error, can substantially improve the power of the trial.

3.3 Different interests from different stakeholders: How to reconcile demands from regulatory authorities, the scientific community and the consumers

The regulatory environment for nutrition is substantially different from that for medicinal products. Health claims for (functional) foods are subject to a variety of regulations depending on the category to which the product belongs, e.g. food, dietary supplement, or medical food. Harmonization of regulatory requirements between countries is progressing, but is less advanced than for drugs, especially regarding the requirements for conducting clinical trials prior to product launch. These aspects significantly influence the choice of study design and statistical methodology needed to build robust and reliable evidence that can convince different regulatory authorities. Other key stakeholders are the medical/nutrition scientific community and consumers. While the first group is more interested in outcomes from a public health perspective, consumers are primarily looking for a direct personal health benefit. To succeed in this complex landscape, the evidence generated by a clinical research plan needs to be built through a multistage process that requires input from commercial, scientific and regulatory experts, with a critical need to achieve acceptance by consumers [24, 25].

This environment, in which decision-making is subject to various short- and mid-term constraints while health benefits are often only detectable in the long term, is not always well suited to clinical trials deployed in a lock-step fashion. Adaptive designs may help to improve the efficiency of a clinical research plan by improving flexibility, the speed at which results are obtained and the ability to integrate external sources of information.

However, speeding up the course of a trial is not always possible. The options for trial adaptation can become limited when the intervention duration is much longer than the recruitment period of the trial, or when the endpoint of interest is assessed late in the trial process. For example, demonstrating the benefit of nutrition in chronic conditions and/or as a preventive measure is a long-term objective, and the lag time in observing a clinical benefit of a nutritional intervention often dictates a long trial duration. In this case, an adaptive design strategy relying on early biomarker readouts can still be of interest.
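A back-of-the-envelope calculation illustrates this limitation; the recruitment rate, sample size and follow-up durations below are hypothetical.

```python
# Sketch: how the endpoint lag limits the room for adaptation. With N subjects
# recruited at r per month and an endpoint read out f months after enrolment,
# 50% of the information is available at roughly t = N/(2r) + f, while
# recruitment finishes at N/r. All numbers below are hypothetical.
N, r = 300, 20            # planned sample size and recruitment rate (subjects/month)
for f in (1, 6, 18):      # short vs. long endpoint lag (months)
    t_interim = N / (2 * r) + f      # earliest time 50% of outcomes are observed
    enrolled_by_then = min(N, r * t_interim)
    print(f"lag {f:>2} months: interim at ~{t_interim:.1f} months, "
          f"{enrolled_by_then:.0f}/{N} subjects already enrolled")
# With an 18-month lag, recruitment (15 months) is finished before the interim
# readout, so adaptations such as stopping or resizing can no longer save subjects.
```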

Lastly, a more flexible and faster development process will not solve all the aspects of a complex and dynamic research environment. The different interests of the different stakeholders also impact the timeframe of development. Consumers expect a rapid effect on their wellbeing and are less likely to adhere to a product if it takes months or years to see a difference. An improvement at the public health level, in contrast, requires a long-term strategy. It is often very difficult or impossible to combine objectives that take substantially different timeframes to achieve within the same clinical trial.

4. Adaptive Clinical Trials in practice: points to consider

To overcome some of the challenges in nutrition clinical trials, the use of adaptive design methods may be useful. In light of the nutrition research specificities described above, and to facilitate the understanding and implementation of successful adaptive designs in the nutritional field, this section highlights some general considerations and practical recommendations gained from academic and industry experience with adaptive clinical trials over the past years.

4.1  Upfront assessment of trial strategies

Designing and implementing flexible trials usually requires more upfront preparation than traditional fixed trial designs. It is strongly recommended that the rationale, acceptability, feasibility and potential impacts of the envisaged adaptations be carefully evaluated at the planning stage. In practice, the clinical trial team is encouraged to evaluate different trial options, comparing scientific aspects, statistical operating characteristics, operational feasibility, bias implications, chances of success, timelines, possibilities for messaging/communication and also financial implications. It is important to include in the assessment plausible clinical trial scenarios covering pessimistic, expected and optimistic cases. Furthermore, the trial design scenarios should not only cover flexible features but should also include an appropriate traditional fixed design. In case the adaptive trial can replace several traditional trials, the assessment should be done in light of the overall clinical development plan.

This evaluation allows the potential benefits to be weighed against the challenges and the extra effort required by a flexible design implementation. When comparing all the various aspects of the trial options, the conclusion may be that implementing an adaptive design is not the most beneficial solution. Even if a fixed design is finally retained, this assessment is generally highly beneficial for the trial or the clinical development plan.

Some of the elements mentioned above are detailed in the next sections. Further consideration on the planning phase of an adaptive trial can be found in [6, 26, 27].

4.2. Clinical trial simulations: quantitative assessment of design performance

There are many uncertainties before and after a trial adaptation. There is also a concern that the performance of less well-understood designs is not well known, because the statistical methods are not yet fully developed. Clinical trial simulations should be conducted at the planning stage of the clinical trial to address these concerns and to provide enough evidence for informed decisions on the design of the trial.

Clinical trial simulation is a process that uses computing to mimic the conduct of a clinical trial by creating virtual patients and extrapolating (or predicting) clinical outcomes for each of them [28]. When the adaptations are prospective, simulations can help assess biases and explore ways to correct them. Simulation allows adaptation rules to be fine-tuned and the operating characteristics, validity, robustness and chances of success of the adaptive trial to be evaluated under various clinical trial assumptions. For example, the statistician can simulate clinical trials with different ranges of intervention effect, different levels of expected heterogeneity, different dropout patterns and different timings of the interim analysis. It is important to perform this simulation exercise not only for the adaptive design but also for the corresponding fixed trial design. It is likely that no design will be optimal for all aspects investigated, and the cross-functional team will have to define quantitative and measurable criteria on which to base design optimization [26].
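As a minimal sketch of what such a simulation exercise can look like in practice, the code below scans a hypothetical fixed, single-look design over an assumed grid of effect sizes and dropout rates; the scenario values, sample size and analysis model are illustrative assumptions, not the simulation plan of any specific trial.

```python
# Sketch: scanning the operating characteristics of a (fixed, single-look) design
# over a grid of scenarios: true effect size x dropout rate. The scenario values,
# sample size and analysis model are illustrative assumptions.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)

def prob_success(true_d, dropout, n_per_arm=150, n_sim=5_000, alpha=0.05):
    """Probability of a significant two-sided t-test under one scenario (completers only)."""
    n_eval = int(round(n_per_arm * (1 - dropout)))
    hits = 0
    for _ in range(n_sim):
        treated = rng.normal(true_d, 1.0, n_eval)
        control = rng.normal(0.0, 1.0, n_eval)
        if ttest_ind(treated, control).pvalue < alpha:
            hits += 1
    return hits / n_sim

for true_d in (0.0, 0.2, 0.3):       # null, pessimistic and expected effect
    for dropout in (0.05, 0.20):     # low vs. high dropout
        print(f"d = {true_d}, dropout = {dropout:.0%}: "
              f"P(success) ~ {prob_success(true_d, dropout):.2f}")
```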

Simulation and modeling activities lead to more carefully thought-out trials [26, 27] and, in the last decade, have played an increasingly key role in improving the efficiency of clinical trials. This process enables the project team to reflect deeply on the trial design and on how the success of the trial should be defined. It helps crystallize discussions around quantitative measures of design performance rather than subjective points of view, raising technical and practical considerations earlier than usual, considerations that are too often addressed late, during the execution phase of the trial. While the simulation exercise is the only way to ensure appropriate design characteristics for complex adaptations, it should also be performed when implementing well-understood adaptive designs whose operating characteristics can be derived analytically.

Readers can find more details on what trial simulation consists of in [26, 29, 30].

4.3  Statistical perspective, from the design to analysis

Valid statistical methods are necessary to ensure the success of a clinical trial. Some topics have met with major controversy and can trigger complex debates that are not only statistical. Even if "ready-made" statistical/design solutions exist, it is important that the trial statistician invests time to really understand these methods in the context of the research project, with their pros and cons. The literature on the topic is large and still growing. The statistician must also be able to translate design features into understandable, practical considerations that are relevant to the other functions in the clinical trial team. This is critical to facilitate the discussions during the assessment of the different design options.

Even for a single adaptive feature, there is no one-size-fits-all method. For example, if there are uncertainties about the assumptions made at the design stage, the trial statistician may want to investigate solutions that allow the sample size to be changed during the trial. There are many sample size modification strategies. First, the trial statistician will have to investigate and compare different approaches, such as fully sequential, group sequential and sample size re-assessment strategies. For the latter, one may have to choose between a blinded sample size re-estimation assessing the variability of the response and an unblinded sample size re-estimation assessing the effect size of the intervention at interim. Each of these approaches underlies a different clinical trial strategy and has its own technical challenges and operational implications. Suppose that the trial team plans an unblinded sample size re-estimation: further layers of methodological decisions will need to be discussed, such as the method of re-estimation. Depending on the trial setting, some methods can be less conservative or more powerful than others in terms of sample size consumption; these aspects can be assessed with the help of simulations. The team could also target different objectives, for example: 1- maintaining the observed intervention effect (i.e., a scientifically meaningful difference), 2- achieving conditional power targeting the original effect, or 3- reaching a desired reproducibility probability. The method of controlling the type I error at the final stage can also influence the choice of re-assessment method, as one can choose either to 1- perform a non-standard analysis by modifying the test statistics or p-value boundaries, or 2- perform a standard analysis at the end but state conditions under which the sample size may be increased. The considerations highlighted above do not cover the full picture of sample size re-estimation methodology; they are simply a first layer of thinking, intended to highlight the inherent technical complexities behind a single adaptation. A comprehensive summary of sample size re-estimation can be found in [31, 32].
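To make one of these notions concrete, the sketch below computes conditional power at an unblinded interim look under the canonical normal approximation, targeting the originally assumed effect; the interim values and planning assumptions are hypothetical, and the specific decision rule mentioned in the final comment is only one option among those discussed above.

```python
# Sketch: conditional power at an unblinded interim look, under the canonical
# normal approximation. `theta` is the drift parameter, i.e. the expected final
# z-statistic if the originally assumed effect is true. Values are hypothetical.
from math import sqrt
from scipy.stats import norm

def conditional_power(z_interim, info_frac, theta, alpha_one_sided=0.025):
    """P(final Z > z_{1-alpha} | interim Z = z_interim), assuming drift theta."""
    z_alpha = norm.ppf(1 - alpha_one_sided)
    shortfall = z_alpha - sqrt(info_frac) * z_interim - theta * (1 - info_frac)
    return norm.sf(shortfall / sqrt(1 - info_frac))

# Trial originally planned for 90% power at one-sided 2.5% -> theta ~ 3.24.
theta_planned = norm.ppf(0.975) + norm.ppf(0.90)
for z1 in (0.5, 1.0, 1.5):  # possible interim z-statistics at 50% information
    print(f"interim z = {z1}: conditional power ~ {conditional_power(z1, 0.5, theta_planned):.2f}")
# A 'promising zone' type rule would typically increase the sample size only when
# the conditional power falls in an intermediate range.
```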

Major adaptations or modifications to a trial could 1- introduce operational bias/variation in data collection, 2- result in a shift in the target population in terms of either location or scale parameter, and 3- lead to inconsistency between the hypotheses to be tested and the corresponding statistical tests. It is always worthwhile to investigate differences in results across stages (before and after adaptation), looking for potential bias that might have been introduced. This investigation needs to be supported by statistical, operational and scientific views to distinguish any bias from a natural drift of the trial population [26, 33, 34].

Overall, it is important that results obtained from complex adaptations be scrutinized by the scientific community, especially when dealing with population enrichment or endpoint selection. It is important to be aware that under complex adaptive designs, valid statistical tests and the corresponding inferences are often difficult, if not impossible, to obtain. A major concern is the protection of the type I error rate, as a naïve analysis in the presence of multiple looks and data-driven changes usually inflates the false positive rate. For some adaptations, another statistical concern is how to obtain reliable parameter estimates, confidence intervals and correct p-values when combining data from subjects included before and after the interim looks [35, 36]. Note that adaptive designs conducted in early development do not necessarily need to meet the same statistical targets or requirements as late-phase confirmatory trials: they may focus less on the control of the type I error and more on obtaining unbiased estimates of the intervention effect.
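A quick simulation illustrates the inflation: if each cumulative look is naïvely tested at the nominal two-sided 5% level, the overall false positive rate grows well beyond 5%. The number of looks and subjects per look below are illustrative assumptions.

```python
# Sketch: naive repeated testing at the nominal two-sided 5% level across interim
# looks inflates the overall false positive rate. Number of looks and subjects
# per look are illustrative.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def naive_overall_alpha(n_looks, n_per_look=50, n_sim=20_000, alpha=0.05):
    """Overall type I error when each cumulative look is tested at level alpha."""
    z_crit = norm.ppf(1 - alpha / 2)
    # Under the null, each look adds n_per_look pairs; the increment of the
    # cumulative sum of pairwise differences is N(0, 2 * n_per_look).
    increments = rng.normal(0.0, np.sqrt(2.0 * n_per_look), size=(n_sim, n_looks))
    cum_sums = np.cumsum(increments, axis=1)
    m = np.arange(1, n_looks + 1) * n_per_look     # subjects per arm at each look
    z_at_looks = cum_sums / np.sqrt(2.0 * m)
    return (np.abs(z_at_looks) > z_crit).any(axis=1).mean()

for k in (1, 2, 5):
    print(f"{k} look(s): overall type I error ~ {naive_overall_alpha(k):.3f}")
# Roughly 0.05, 0.08 and 0.14; hence the need for group sequential boundaries or
# combination tests that preserve the overall error rate.
```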

4.4. Strategy for clinical operations: efficiency and bias control

Achieving the benefits of adaptive trials requires an effective operational strategy. Reasonable logistics efforts and an appropriate technological infrastructure should be in place to maintain the integrity, quality, validity and efficiency of the intended adaptive trial. Operational bias can adversely affect critical decision-making during the conduct of a trial [1] as well as the final interpretation of the results. It is suggested to develop upfront a bias management plan that aims to identify, alleviate or eliminate, and control operational biases.

To be able to make informed decisions at interim, data must be collected, monitored, cleaned, aggregated and analyzed with minimal delay. This can be greatly facilitated by electronic data capture (EDC) and real-time data access. Compared to a traditional design, the frequency of monitoring, data cleaning and protocol deviation review will need to increase. This effort has a cost and is part of an equation that includes timelines and resources. The goal for the interim analysis is to get accurate and reliable data, in a timely fashion, with the right amount of effort. As "100% cleaned data" can be very difficult or impossible to achieve, the team needs to focus on getting the best possible data quality, knowing the strengths and potential limitations of the data. Careful attention should be paid to safety data and to the data that are critical for decision-making at interim.

Selection of qualified study sites and an appropriate supply structure is key to addressing potential recruitment and logistics challenges. For costly and/or complicated nutrition interventions, packaging and supply need to be optimized, especially when the design allows for dropping inferior groups, adaptive randomization or sample size re-assessment. When assessing the feasibility of the adaptive trial design, the expected recruitment rate is crucial for choosing the appropriate timing of the interim analysis.

The use of adaptive design methods may introduce so-called operational bias and/or variation, especially after the review of interim data. "All monitoring has potential action thresholds, whether implicit or explicit, and lack of action will generally imply that such threshold has not been reached" [37]. Operational bias occurs when information extracted from an ongoing trial impacts the participant pool, investigator behavior or other clinical aspects affecting the conduct of the trial, in such a way that conclusions about important safety or clinical benefit parameters are biased. To limit the inferences that can be drawn from observing any mid-trial changes, one solution is to limit upfront the information that is shared and to give the right level of access to the right people. Although the statistical details are key to the success of the adaptation, the protocol can stay general on the decision-making rules; the details can be left for other documents with a more limited circulation (not accessible to trial participants or the extended project team), such as simulation reports and the interim statistical analysis plan. Some types of adaptations are more sensitive than others to the problem of conveying information, and the potential bias that may arise should be taken into consideration and balanced against the integrity and interpretability of the trial. More details on this particular aspect can be found in [37].

Procedural considerations also need to be thought through upfront, during the planning phase. This refers to the decision process and the dissemination of information. Pivotal aspects to address are the establishment of clear data, information and decision flows, and the implementation of a Data Monitoring Committee (see next section).

Further consideration on operational challenges can be found in [37–39].

4.5. Data Monitoring Committee (DMC)

For adaptive clinical trials, it is strongly suggested that an independent DMC be established to serve as a guardian of the integrity, quality and validity of the intended clinical trial. The DMC, independent of any activities related to the clinical operations of the study, is composed of experienced medical, scientific and statistical members. It is important to ensure that all relevant expertise is represented in the committee, but it seems advisable that the analysis, review and decision-making roles remain in the hands of a limited number of individuals [37]. Depending on the study objectives and the needs of the sponsor, the primary responsibility of the independent DMC is to ensure the validity and integrity of the clinical trial by performing ongoing safety monitoring, as well as by being involved in interim analyses for the evaluation of health benefits. The independent DMC performs its functions and activities according to a written charter, which is usually developed and approved by the sponsor, the investigator and the DMC. This charter outlines the "rules of the game" by describing clear decision rules, in order to avoid subjective and inappropriate decisions by the DMC, while acknowledging that the DMC may have to take critical decisions based on unexpected trial events not anticipated in the charter. In practice, there is a separate team supporting the functions and activities of the DMC. The DMC support staff is responsible for performing the unblinded interim analysis and presenting the results to the DMC.

The most critical issue regarding the DMC is its true independence. To ensure the integrity and success of the clinical trial, the DMC must remain independent of the project team in order to provide fair and unbiased recommendations based on the interim data. It should be noted that there is an ongoing discussion about whether an additional burden should be placed on an existing DMC or a separate DMC established to monitor the scientific validity and integrity of clinical trials utilizing adaptive design methods.

Further reading on the role and responsibilities of the DMC can be found in [1, 40–43].

4.6. Computational solutions

Statistical methods for the design and analysis of adaptive trials often pose computational challenges, which results in the need for appropriate software solutions. Through academic and industry contributions, progress has been made in developing computational solutions over the last decade. Commercial software packages providing tools for planning, simulation and analysis are available, such as ADDPLAN and East. Existing SAS procedures are limited to the design and analysis of group sequential designs, but SAS macros or SAS/IML macros can be found, for example, in [44, 45]. A simple search of the CRAN (Comprehensive R Archive Network) reveals an interesting number of packages, including simulation features. A broad range of R programs, together with SAS programs, can also be found in [46]. A complete review of existing solutions is beyond the scope of this section; helpful reviews on this topic can be found in [47, 48].

Computational solutions that deploy new statistical methodology, address more complex adaptations or increase the efficiency of existing solutions can take time to be implemented and made available to the trial statistician. In practice, in-house programming is often required to develop a tailor-made solution.

5. Concluding Remarks

Although introducing flexibility during the conduct of nutrition clinical trials is very attractive, three major questions inevitably arise. First, does the scientific and statistical validity of the trial remain intact after the intended modifications? Second, does the adapted design still meet the regulatory requirements to demonstrate the targeted nutritional health benefit? Third, does the clinical trial still address its original objectives after significant modification of the trial procedures? These questions should not only be addressed at the individual trial level; it would also be desirable for the regulatory and scientific communities to develop guidelines on how to use adaptive design methods in the nutrition clinical research and development process. Adaptive design methods have been used with a record of success in the review/approval process of pharmaceutical products. However, the use of adaptive design methods in clinical trials conducted in nutritional research is not yet well established. The authors hope that this manuscript will contribute to a better understanding and acceptance of adaptive design methodology by the scientific community of nutrition research and will help in designing more efficient and ethical clinical trials.

We acknowledge that walking the path of adaptive design will not be without obstacles, especially for clinical teams that are not routinely involved in this type of design. Implementation and execution of adaptive designs present a number of operational and technical difficulties that are not always easy to overcome. These issues, as well as a more general resistance to change, have hampered concrete adoption of the methodology in the past; see [27, 49, 50] for industry and academic perspectives on the topic. Nevertheless, experience gained from concrete and meaningful implementations of flexible designs in other research areas should greatly help a beneficial transition to the adaptive mindset in nutrition research. An organization aiming to upgrade its environment to support adaptive design implementation and execution will need to substantially review its existing clinical trial practices. Soliciting the assistance of experienced external partners (contract research organizations or academic groups) may help to accelerate progress through the learning curve. Readers are encouraged to go through the references cited in this manuscript; they should be of great help to anyone who wants to progress and raise their awareness of the many aspects of flexible designs.

“A good design is the one that provides scientific validity and integrity and uses information derived from patients in the most intelligent way to make appropriate inferences at the earliest time point” [26]. Adaptive clinical trial designs with a “learn and confirm” approach fit this definition perfectly. By using the information per subject in the most intelligent way, the methodology can have a great transformational impact on nutrition research. In addition, increased awareness of adaptive design seems to contribute to a better implementation and execution of traditional trials [27]. Indeed, recommendations for a successful implementation of adaptive designs, such as a well-prepared planning phase, the assessment of trial options with a cross-functional team, simulation-based evaluation of trial operating characteristics, quantitative comparison of design options, efficient data collection and cleaning, and optimized procedural and logistics plans, also play an important role in the success of traditional clinical trial designs.

However, it should be clear that adaptive designs will not provide the solution to all the challenges of nutritional clinical trials. They should be part of a broader mindset that does not limit itself to randomized clinical trials as the sole evidence to demonstrate the health benefits of nutrition. In the past decade, the range of targeted health benefits explored through nutrition interventions has widened significantly. This fast-growing ambition is outpacing the rate of development of clinical trial methodologies that are adequately tailored to the needs of nutrition research. Although randomized clinical trials will remain the cornerstone of clinical evidence, nutritional clinical research could substantially benefit from other types of methodologies. In that respect, we can mention epidemiological studies (i.e. large observational studies and cohorts), pragmatic/large simple trial approaches, N=1 trials, data mining techniques, Bayesian approaches, modeling and simulation of clinical trials, and translational statistics. An increased level of fundamental research to better understand the physiology of nutrition, and the development of better predictive health-related biomarkers, are also needed. They will not only complement the evidence from adaptive design trials but will also provide important knowledge that will allow better-informed adaptive designs for clinical trials in nutrition research.

     Innovation in clinical research methodology will be essential to further improve the future standards for nutritional health benefit research. Together with evidence from other research methodologies, adaptive design methods and mindset offer an important opportunity to substantially raise the level of nutritional health benefit evidence beyond what is possible with traditional randomized controlled trials.

Acknowledgement

We thank Andreas Rytz (Statistician, Nestlé Research) for his in-depth review of the manuscript and constructive comments. We thank Stephane Collet (Head of clinical operation, Nestlé Research) for having supported this work from the beginning. We thank all the people from the Nestlé Clinical Development Unit that have contributed directly or indirectly to the progress of this manuscript.

The authors’ responsibilities were as follows — JT initiated the manuscript idea, wrote the manuscript and had primary responsibility for the final content of the manuscript. All authors contributed significantly to the writing, provided critical intellectual content and were involved in the outlining, drafting, reviewing and final approval of the manuscript.

Authors have no conflicts of interest to disclose.

Sources of support: No financing to disclose

Clinical Trial Registry number: Not applicable

Health research reporting checklist, participant flow chart: Not applicable

Abbreviations:

US: United States
FDA: Food and Drug Administration
PhRMA: Pharmaceutical Research and Manufacturers of America
CDER: Center for Drug Evaluation and Research
CBER: Center for Biologics Evaluation and Research
GSD: Group Sequential Design
SSR: Sample Size Re-assessment
CRAN: Comprehensive R Archive Network

References

  1. U.S. Department of Health and Human Services, Food and Drug Administration, Center for Drug Evaluation and Research, Center for Biologics Evaluation and Research (2010). Draft guidance for industry: Adaptive design clinical trials for drugs and biologics. Rockville.
  2. Chow SC, Chang M, Pong A (2005) Statistical consideration of adaptive methods in clinical development. J Biopharm Stat 15: 575–91.
  3. Gallo P, Chuang-Stein C, Dragalin V, Gaydos B, Krams M, Pinheiro J (2006) PhRMA Working Group. Adaptive designs in clinical drug development–An executive summary of the PhRMA working group. J Biopharm Stat 16: 275–83.
  4. European Medicines Agency, Committee for Medicinal Products for Human Use (CHMP). Point to consider on Methodological issues in confirmatory clinical trials with flexible design and analysis plan. CPMP/EWP/2459/02. London, UK, 2002.
  5. European Medicines Agency, Committee for Medicinal Products for Human Use (CHMP). Reflection paper on Methodological issues in confirmatory clinical trials with flexible design and analysis plan. CPMP/EWP/2459/02. London, UK, 2006.
  6. Chow SC, Chang M (2008) Adaptive design methods in clinical trials – a review. Orphanet J Rare Dis 3: 11.
  7. Chow SC, Chang M (2011) Adaptive design methods in clinical trials. 2nd ed. New York, NY: Chapman and Hall/CRC Press, Taylor and Francis.
  8. U.S. Department of Health and Human Services, Food and Drug Administration, Center for Devices and Radiological Health, Center for Biologics Evaluation and Research (2015) Draft guidance for industry and food and drug administration staff: Adaptive designs for medical device clinical studies. Rockville.
  9. Miller E, Gallo P, He W, Kammerman LA, Koury K, Maca J, et al. (2017) DIA’s Adaptive Design Scientific Working Group (ADSWG): Best practices case studies for “less well-understood” adaptive designs. Ther Innov Regul Sci 51: 77–88.
  10. Chow SC, Corey R (2011) Benefits, challenges and obstacles of adaptive clinical trial designs. Orphanet J Rare Dis 6: 79. [crossref]
  11. Krams M, Sharma A, Dragalin V, Burns DD, Fardipour P, Padmanabhan SK, Perevozskaya I, Littman G, Maguire R. Pharm Med 3: 139–48.
  12. Bretz F, Branson M, Burman CF, Chuang-Stein C, Coffey CS (2009) Adaptivity in drug discovery and development. Drug Dev Res 70: 169–90.
  13. Legocki LJ, Meurer WJ, Frederiksen S, Lewis RJ, Durkalski VL, et al. (2015) Clinical trialist perspectives on the ethics of adaptive clinical trials: A mixed-methods analysis. BMC Med Ethics 16: 27.
  14. Blumberg J, Heaney RP, Huncharek M, Scholl T, Stampfer M, et al. (2010) Evidence-based criteria in the nutritional context. Nutr Rev 68: 478–484. [crossref]
  15. Gallagher AM, Meijer GW, Richardson DP, Rondeau V, Skarp M, et al. (2011) International Life Sciences Institute Europe Functional Foods Task Force. A standardised approach towards PROving the efficacy of foods and food constituents for health CLAIMs (PROCLAIM): Providing guidance. Br J Nutr 106: 16–28.
  16.  Schmitt JA, Bouzamondo H, Brighenti F, Kies AK, Macdonald I, et al. (2012) The application of good clinical practice in nutrition research. Eur J Clin Nutr 66: 1280–1281. [crossref]
  17. Heaney RP (2008) Nutrients, endpoints, and the problem of proof. J Nutr 138: 1591–1595. [crossref]
  18. Welch RW, Antoine JM, Berta JL, Bub A, de Vries J, et al. (2011) Guidelines for the design, conduct and reporting of human intervention studies to evaluate the health benefits of foods. Br J Nut 106 : 3–15.
  19. Hébert JR, Frongillo EA, Adams SA, Turner-McGrievy GM, Hurley TG, et al. (2016) Perspective: Randomized controlled trials are not a panacea for diet-related research. Adv Nutr 7: 423–32.
  20. Filozof C, Chow SC, Dimick-Santos L, Chen YF, Williams RN, et al. (2017) Clinical endpoints and adaptive clinical trials in precirrhotic nonalcoholic steatohepatitis: Facilitating development approaches for an emerging epidemic. Hepatol Commun 1: 577–85.
  21. Maca J, Bhattacharya S, Dragalin V, Gallo P, Krams M (2006) Adaptive seamless phase II/III designs – Background, operational aspects, and examples. Drug Inf J 4: 463–73.
  22. Chow SC, Lin M (2015) Analysis of two-stage adaptive seamless trial design. Pharm Anal Acta 6: 341.
  23. U.S. Department of Health and Human Services, Food and Drug Administration, Center for Drug Evaluation and Research, Center for Biologics Evaluation and Research, Center for Devices and Radiological Health (2012) Draft guidance for industry: Enrichment strategies for clinical trials to support approval of human drugs and biological products. Rockville.
  24. Jones PJH, Jew S (2007) Functional food development: Concept to reality. Trends Food Sci Technol 18: 387–90.
  25. Siró I, Kápolna E, Kápolna B, Lugasi A (2008) Functional food. Product development, marketing and consumer acceptance–a review. Appetite 51: 456–467. [crossref]
  26. Gaydos B, Anderson KM, Berry D, Burnham N, Chuang-Stein C, et al. (2009) Good practices for adaptive clinical trials in pharmaceutical product development. Drug Info J 43: 539–56.
  27. Quinlan J, Gaydos B, Maca J, Krams M (2010) Barriers and opportunities for implementation of adaptive designs in pharmaceutical product development. Clin Trials 7: 167–173. [crossref]
  28. Li HI, Lai PY (2010) Clinical trial simulation. In: Chow SC. Encyclopedia of Biopharmaceutical Statistics. 3rd ed. New York, NY: Chapman and Hall/CRC Press, Taylor and Francis 282–84.
  29. Burton A, Altman DG, Royston P, Holder RL (2006) The design of simulation studies in medical statistics. Stat Med 25: 4279–4292. [crossref]
  30.  Westfall PH, Tsai K, Ogenstad S, Tomoiaga A, Moseley S, et al. (2008) Clinical trials simulation: a statistical approach. J Biopharm Stat 18: 611–630. [crossref]
  31. Chuang-Stein C, Anderson K, Gallo P, Collins S (2006) Sample size re-estimation: A review and recommendations. Drug Info J 40: 475–84.
  32. Pritchett YL, Menon S, Marchenko O, Antonijevic Z, Miller E, et al. (2015) Sample size re-estimation designs in confirmatory clinical trials – Current state, statistical considerations, and practical guidance. Stat Biopharm Res 7: 309–21.
  33.  Gallo P, Chuang-Stein C (2009) What should be the role of homogeneity testing in adaptive trials? Pharm Stat 8: 1–4. [crossref]
  34. Chow SC, Chang M (2011) Protocol amendment. In: Chow SC, Chang M. Adaptive design methods in clinical trials. 2nd ed. New York, NY: Chapman and Hall/CRC Press, Taylor and Francis 23–38.
  35. Brannath W, Koenig F, Bauer P. Improved repeated confidence bounds in trials with a maximal goal. Biom J 45: 311–24.
  36. Posch M, Koenig F, Branson M, Brannath W, Dunger-Baldauf C, et al. (2005) Testing and estimation in flexible group sequential designs with adaptive treatment selection. Stat Med 24: 3697–714.
  37. Gallo P (2006) Operational challenges in adaptive design implementation. Pharm Stat 5: 119–124. [crossref]
  38. Quinlan JA, Krams M (2006) Implementing adaptive designs logistical and operational considerations. Drug Inf J 40: 437–44.
  39. He W, Gallo P, Miller E, Jemiai Y, Maca J, et al. (2017) Addressing challenges and opportunities of “less well-understood” adaptive designs. Ther Innov Regul Sci 51: 60–68.
  40. International Conference on Harmonisation Expert Working Group (1998) ICH harmonized tripartite guideline: statistical principles for clinical trials. Federal Register 63: 49583–98.
  41. European Medicines Agency, Committee for Medicinal Products for Human Use (CHMP) (2005) Guideline on Data Monitoring Committees. CHMP/EWP/5872/03. London, UK.
  42. U.S. Department of Health and Human Services, Food and Drug Administration, Center for Biologics Evaluation and Research, Center for Drug Evaluation and Research, Center for Devices and Radiological Health (2006) Guidance for clinical trial sponsors: Establishment and operation of clinical trial data monitoring committees. Rockville, MD: FDA.
  43. Herson J (2016) Data and safety monitoring committees in clinical trials. 2nd ed. Boca Raton, FL: Chapman and Hall/CRC Press, Taylor & Francis.
  44. Bretz F, Koenig F, Brannath W, Glimm E, Posch M (2009) Adaptive designs for confirmatory clinical trials. Stat Med 28: 1181–1217. [crossref]
  45. Menon SM, Zink RC (2015) Modern approaches to clinical trials using SAS®: Classical, adaptive, and Bayesian methods. Cary, NC: SAS Institute.
  46. Chang M (2014) Adaptive design theory and implementation using SAS and R. 2nd ed. Boca Raton, FL: Chapman and Hall/CRC Press, Taylor and Francis.
  47. Tymofyeyev Y (2014) A review of available software and capabilities for adaptive designs. In: He W, Pinheiro J, Kuznetsova OM. Practical considerations for adaptive trial design and implementation. NewYork, NY: Springer 139–55.
  48. Bauer P, Bretz F, Dragalin V, König F, et al. (2016) Twenty-five years of confirmatory adaptive designs: opportunities and pitfalls. Stat Med 35: 325–347. [crossref]
  49. Coffey CS, Levin B, Clark C, Timmerman C, Wittes J, et al. (2010) Overview, hurdles, and future work in adaptive designs: Perspectives from a National Institutes of Health-funded workshop. Clin Trials 9: 671–80.
  50. Morgan CC, Huyck S, Jenkins M, Chen L, Bedding A, et al. (2014) Adaptive Design: Results of 2012 survey on perception and use. Ther Innov Regul Sci 48: 473–81.

Article Type

Review Article

Publication history

Received: May 02, 2018
Accepted: May 18, 2018
Published: May 24, 2018

Citation

Jérôme Tanguy, Rafael Crabbé, Laura Gosoniu and Shein-Chung Chow (2018) Challenges in Nutritional Clinical Trials: How Can Adaptive Design Be of Help. Nutr Res Food Sci J Volume 1(1): 1–10. DOI: 10.31038/NRFSJ.2018112

Corresponding author

Jérôme Tanguy
Nestlé Research Center,
Clinical Development Unit,
Route du Jorat 57,
1000 Lausanne, Switzerland;
Tel: +41217858998;