Development Project

Online Performance Problem Detection, Diagnosis, and Visualization

8SWS/12LP (Grunske, van Hoorn)

Stand: WS 2013/14

Important Dates

A detailed schedule is provided in the ILIAS system (password-protected)

  • 1st Meeting: Tue, Nov. 12, 2013 @ 15:45–17:00 in Room 1.049
  • Weekly meeting (bi-weekly with supervisors): Mon @ 11:30–13:00 in Room 1.049
  • Exams: April 7–11, 2014 (details to be announced)

Summary

Continuous monitoring of software systems serves to collect, analyze, and visualize performance measures, such as arrival rates, response times, and resource utilization. Manual analysis of performance data, e.g., to detect and diagnose performance problems, is time-consuming and error-prone. Hence, automatic approaches are needed, which continuously process the data and provide meaningful information to system maintainers.

The topic of this development project is the online detection, diagnosis, and visualization of performance problems, employing time series analysis in combination with information about the architecture of the monitored software system. It is performed in the context of the Kieker framework for application performance monitoring and dynamic analysis [1, 2, 3].
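To illustrate the general idea behind forecasting-based online anomaly detection (as pursued by approaches such as OPAD), the following minimal sketch forecasts the next measurement from a sliding window, using a simple moving average as a stand-in for the more sophisticated forecasting algorithms discussed below, and flags an observation as anomalous when it deviates too far from the forecast. The class name, window size, and threshold are illustrative choices, not part of any of the cited approaches.

```python
from collections import deque


class ForecastAnomalyDetector:
    """Illustrative sketch (not the actual OPAD implementation):
    forecast the next measurement from a sliding window via a simple
    moving average, and flag the observation as anomalous when the
    relative deviation from the forecast exceeds a threshold."""

    def __init__(self, window_size=10, threshold=0.5):
        self.window = deque(maxlen=window_size)
        self.threshold = threshold  # relative deviation, e.g. 0.5 = 50%

    def observe(self, value):
        """Process one measurement; return True if it is anomalous."""
        if len(self.window) < self.window.maxlen:
            # Not enough history yet to produce a forecast.
            self.window.append(value)
            return False
        forecast = sum(self.window) / len(self.window)
        deviation = abs(value - forecast) / max(forecast, 1e-9)
        self.window.append(value)
        return deviation > self.threshold
```

Feeding a stream of stable response times followed by a sudden spike would leave the stable values unflagged and flag the spike. Real approaches replace the moving average with dedicated forecasters (cf. [7, 8]) and tune window size and threshold per measure.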

The goals of this project are two-fold:

  1. The existing OPAD approach [4, 5] shall be extended, e.g., by:
     i. root cause analysis employing architectural information on calling dependencies (mandatory; based on the RanCorr approach [6]),
     ii. correlation of multiple measures (response times and workload),
     iii. additional forecasting algorithms (e.g., [7, 8]),
     iv. auto-configuration and self-tuning capabilities.
  2. The (extended) OPAD approach shall be integrated into an appropriate Web-based user interface, which provides configuration and live visualization capabilities (in the form of interactive hierarchical dependency graphs and time series plots). The technical foundation is provided by Kieker's WebGUI [9], which may need to be extended.
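The root-cause analysis in goal 1.i can be sketched as follows. This is only an illustration of the underlying intuition of correlating anomaly scores along calling dependencies, not the actual RanCorr algorithm [6]: a component whose anomaly score cannot be explained by any of its callees is ranked as a likely root cause. All names and the scoring rule are hypothetical.

```python
def rank_root_causes(anomaly_scores, callees):
    """Rank components as root-cause candidates (illustrative sketch,
    not the actual RanCorr algorithm). A component's own anomaly score
    is discounted by the maximum score among its callees: if a callee
    is at least as anomalous, the caller's anomaly is likely just
    propagated from below."""
    ranking = {}
    for component, score in anomaly_scores.items():
        callee_max = max(
            (anomaly_scores[c] for c in callees.get(component, ())),
            default=0.0,
        )
        # Positive values mean the anomaly is not explained by callees.
        ranking[component] = score - callee_max
    # Highest unexplained anomaly first.
    return sorted(ranking, key=ranking.get, reverse=True)


# Hypothetical three-tier call graph: frontend -> service -> database.
scores = {"frontend": 0.6, "service": 0.7, "database": 0.9}
graph = {"frontend": ["service"], "service": ["database"], "database": []}
print(rank_root_causes(scores, graph))  # database ranked first
```

Here the database is ranked first because its high anomaly score has no callee to explain it, while the anomalies of its callers are plausibly caused by it.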

Participants are expected to perform systematic quantitative experiments to evaluate the project results. These experiments can be based on a combination of synthetic data, publicly available real data, and lab experiments (e.g., using the Web-based JPetStore contained in the Kieker release, or CoCoME [10]). A list of relevant tickets in Kieker's issue tracking system, which include helpful information, will be provided. Moreover, the results of last semester's development project [11], called KARMA, shall be considered. The Kieker live demo may provide additional technical input. Moreover, state-of-the-art Application Performance Monitoring tools [12], such as AppDynamics, should be investigated.

Literature

  1. Kieker Project, "Kieker web site", http://kieker-monitoring.net, 2012.
  2. A. van Hoorn, M. Rohr, W. Hasselbring, J. Waller, J. Ehlers, S. Frey, and D. Kieselhorst, "Continuous monitoring of software services: Design and application of the Kieker framework," Tech. Rep. TR-0921, Department of Computer Science, University of Kiel, Germany, Nov. 2009.
  3. A. van Hoorn, J. Waller, and W. Hasselbring, "Kieker: A framework for application performance monitoring and dynamic software analysis," in Proceedings of the 3rd ACM/SPEC International Conference on Performance Engineering (ICPE 2012), pp. 247-248, ACM, Apr. 2012.
  4. T. C. Bielefeld, "Online performance anomaly detection for large-scale software systems," Mar. 2012. Diploma Thesis, Kiel University.
  5. T. Frotscher, "Architecture-based multivariate anomaly detection for software systems," Oct. 2013. Master's Thesis, Kiel University.
  6. N. S. Marwede, M. Rohr, A. van Hoorn, and W. Hasselbring, "Automatic failure diagnosis support in distributed large-scale software systems based on timing behavior anomaly correlation," in Proceedings of the 13th European Conference on Software Maintenance and Reengineering (CSMR 2009), pp. 47-57, IEEE Computer Society, Mar. 2009.
  7. N. R. Herbst, N. Huber, S. Kounev, and E. Amrehn, "Self-adaptive workload classification and forecasting for proactive resource provisioning," in Proceedings of the 4th ACM/SPEC International Conference on Performance Engineering (ICPE 2013), pp. 187-198, 2013.
  8. A. Amin, A. Colman, and L. Grunske, "An approach to forecasting QoS attributes of web services based on ARIMA and GARCH models," in Proceedings of the 2012 IEEE 19th International Conference on Web Services (ICWS 2012), pp. 74-81, IEEE, 2012.
  9. N. C. Ehmke, "Everything in sight: Kieker's WebGUI in action (tutorial)," in Proceedings of the Symposium on Software Performance: Joint Kieker/Palladio Days (KPDAYS '13), 2013.
  10. S. Herold, H. Klus, Y. Welsch, C. Deiters, A. Rausch, R. Reussner, K. Krogmann, H. Koziolek, R. Mirandola, B. Hummel, M. Meisinger, and C. Pfaller, "CoCoME - the common component modeling example," in CoCoME, vol. 5153 of Lecture Notes in Computer Science, pp. 16-53, Springer, 2008.
  11. T. Kuhn, H. V. Le, P. Scheide, P. Strobel, C. Waldvogel, K. Wenz, and N. Wolter, "KARMA: Kieker Analysis Repository Metamodel Application," July 2013. Master's development project. University of Stuttgart, Institute of Software Technology, Germany.
  12. J. Kowall and W. Cappelli, "Gartner's magic quadrant for application performance monitoring," 2013.

Additional information and material are provided in the ILIAS system (password-protected).

ILIAS

Course material, announcements, etc. will be provided via the e-learning platform ILIAS.

Please join the ILIAS course "Development Project (WS 13/14): Online Performance Problem Detection, Diagnosis, and Visualization ...".