The Internet of Things (IoT) and Artificial Intelligence (AI) have become hot growth areas in techno-capitalism and will help drive the modern economic world. While the two are different, they rely on each other and share similarities in software Verification and Validation (V&V) and testing. Engineers are busy connecting nearly every electronic device to the internet, meaning that many “things” or devices that were never before connected now will be. This situation presents a “boundary issue” for test teams: where should testing stop? Many IoT managers and stakeholders may wish to limit the testing of these devices or their connections to control costs by covering only the IoT device software. Their rationale is that the “thing” or “system” may be well known, and only the IoT software is “new.” At the same time, AI is spreading to every system, particularly IoT, because of the high volumes of data these systems generate and the need for test automation. This presentation considers several important system-level V&V/test activities and the technical skills testers need to support the growing world of software system testing.
In most cases, the IoT device, its software, the edge interfaces, and finally the cloud elements of the IoT system will all need to be tested. Simplistic, manual testing of IoT needs to evolve into complete Verification and Validation (V&V) across the system lifecycle, often including levels of independence in the test team organization. Testing activities such as analysis, reviews, inspections, modelling, structural tests, functional tests, security tests, and AI-based test automation will need to be appropriately budgeted and then allocated in test plans and strategies that address the risks of IoT and AI systems testing beyond just the IoT device itself. Once testers move beyond basic IoT testing, AI becomes a consideration that must be included in test planning and V&V activities.
Traditional manual testing will have limited application in these new domains compared with test automation that uses Artificial Intelligence (AI) and data analytics (“a method of logical analysis of large amounts of information,” Merriam-Webster) to drive V&V/test evaluations. Testers will benefit from learning and practicing advanced engineering skills to support these new testing activities. Testers must build on their traditional software skills and then expand into newer software, hardware, and systems engineering concepts. Test practitioners who master advanced testing skills such as automation and AI will be in demand while finding the new IoT/AI world challenging, exciting, and fun.
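As a minimal illustration of the kind of data-analytics-driven evaluation described above, the sketch below flags anomalous telemetry readings from a hypothetical IoT device log using a simple z-score check. The function name, telemetry values, and threshold are all illustrative assumptions, not material from the presentation itself.

```python
import statistics

def find_anomalies(readings, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from
    the mean -- a simple analytics check an automated test harness
    might run over IoT device telemetry logs."""
    mean = statistics.mean(readings)
    stdev = statistics.stdev(readings)
    if stdev == 0:  # all readings identical: nothing to flag
        return []
    return [(i, v) for i, v in enumerate(readings)
            if abs(v - mean) / stdev > threshold]

# Hypothetical temperature telemetry (deg C) from a sensor under test;
# the 85.0 spike represents a fault the analytics check should surface.
telemetry = [21.1, 21.3, 21.2, 21.4, 85.0, 21.2, 21.3, 21.1]
print(find_anomalies(telemetry, threshold=2.0))
```

In a real V&V pipeline this statistical check would be one small step in a larger automated suite, running continuously over high-volume device data rather than a hand-picked list.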
Jon Hagar is a senior tester with over 40 years in software development and testing. He has supported software product design, integrity, integration, reliability, measurement, verification, validation, and testing on various projects and software domains (environments). He has an M.S. degree in Computer Science with a specialization in Software Engineering and Testing from Colorado State University and a B.S. degree in Math with specializations in Civil Engineering and Software from Metropolitan State College of Denver, Colorado. Jon has worked in business analysis, systems, and software engineering, specializing in testing, verification, and validation. Projects he has supported include the domains of embedded systems, mobile devices, IoT, and PC/IT systems, as well as test lab and tool development. Currently, Jon works as a consultant for Grand Software Testing, LLC.
Jon has been a member of numerous professional organizations, including the Association for Software Testing (AST), IEEE (25 years), ISO, and the Object Management Group (OMG). He is an author and IEEE project editor on the ISO/IEC/IEEE 29119 software test standards and Testing AI Software, a member of the IEEE 1012 V&V plan standard working group, a supporter of IEEE 1028 (reviews) and IEEE 982.1, and co-chair of the OMG Unified Modeling Language testing profile (UTP 2.0) standard, as well as a voting member on many other standards. He is a past member of professional society boards, including Denver-QA and the American Software Testing Qualifications Board (ASTQB).
Jon has taught hundreds of classes and tutorials in software engineering, systems engineering, and testing throughout industry and universities. He has published numerous articles on software reliability, testing, test tools, formal methods, and mobile and embedded systems. Jon has supported reviews/audits for many companies in CMM, CMMI, testing, Agile, exploratory testing, and process improvement.
Jon is the author of the books Software Test Attacks to Break Mobile and Embedded Devices, IoT Development and Testing, and a children’s book, and a contributor to Agile testing and test automation books. Jon presents regularly at industry working groups and conferences. He holds a patent on web test technologies.
The Test Tribe is a leading global Software Testing Community (proudly Asia’s largest) turned EdTech startup. Started in 2018, it has a mission to give the Testing Craft the glory it deserves while co-creating Smarter, Prouder, more confident Testers.
We take pride in creating unique global Events, Online Community spaces, and eLearning platforms where Software Testers across the globe collaborate, learn and grow.
We intend to be a one-stop destination of choice for Testers across the globe for their upskilling and community needs.