In digital forensics, the analysis and examination phases are the two most crucial stages. The essential big-data challenges associated with these two stages are complexity, volume, diversity, and the interconnection of content. For massive amounts of data, most current forensic tools take comparatively long to run, and evidence identification is inadequate when more than one source is incorporated into a single case (Mohammed, Clarke & Fu). This paper therefore uses a quantitative methodology to investigate solutions for digital forensic analysis of diverse data using Artificial Intelligence.
Literature Review:
Correlation of evidence has been approached in prior work as follows:
In a criminal investigation, case-relevance is a property of any piece of data that quantifies its capacity to answer the investigative 'who, what, when, where, how, and why' questions. The degrees of relevance defined by their work range from Absolutely Irrelevant to Probably Case-Relevant (Gong, Chan, & Gaertner, 2005). Further enhancements, such as having an investigator manually review the items classified by this method, still need to be addressed. A sketch of such a relevance scale follows below.
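To make the scale concrete, the minimal sketch below models it as an ordered enumeration with a toy keyword-hit scorer. Only the endpoint level names (Absolutely Irrelevant, Probably Case-Relevant) come from the cited work; the intermediate level names, the thresholds, and the scoring heuristic are hypothetical illustrations, not the method of Gong, Chan, & Gaertner (2005).

```python
# Hypothetical sketch of a case-relevance scale; only the endpoint level
# names come from the cited work, the rest is illustrative.
from enum import IntEnum

class CaseRelevance(IntEnum):
    ABSOLUTELY_IRRELEVANT = 0   # from the cited scale
    PROBABLY_IRRELEVANT = 1     # assumed intermediate level
    POSSIBLY_CASE_RELEVANT = 2  # assumed intermediate level
    PROBABLY_CASE_RELEVANT = 3  # from the cited scale

def score_item(text: str, case_keywords: set[str]) -> CaseRelevance:
    """Rate a snippet of data by how many case keywords it mentions."""
    hits = sum(1 for word in text.lower().split() if word in case_keywords)
    if hits == 0:
        return CaseRelevance.ABSOLUTELY_IRRELEVANT
    if hits == 1:
        return CaseRelevance.PROBABLY_IRRELEVANT
    if hits == 2:
        return CaseRelevance.POSSIBLY_CASE_RELEVANT
    return CaseRelevance.PROBABLY_CASE_RELEVANT

keywords = {"transfer", "account", "suspect"}
print(score_item("Wire transfer to the suspect account", keywords))
```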
Another line of work makes the case for Distributed Digital Forensics, presenting several scenarios where the forensic workload can no longer be performed on a single workstation. The authors also propose a distributed framework, along with experimental results that show the advantages of the distributed approach, which would in turn enable more sophisticated examination techniques (see the sketch below). However, the tool used does not employ process distribution (Ganesh, 2017).
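The gain from distribution can be sketched with a small coordinator/worker example. Assuming the workload is hashing the files of a large evidence image, the coordinator fans file paths out to parallel workers rather than processing everything sequentially on one machine; local worker processes, the `evidence_image` directory, and the pool size here are illustrative stand-ins for the networked workstations of the cited framework.

```python
# Minimal sketch of distributing a hashing workload; local processes stand
# in for the networked workstations of the cited framework.
import hashlib
from multiprocessing import Pool
from pathlib import Path

def hash_file(path: str) -> tuple[str, str]:
    """Compute the SHA-256 digest of one evidence file in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return path, digest.hexdigest()

if __name__ == "__main__":
    # Hypothetical evidence directory; replace with a real mounted image.
    evidence = [str(p) for p in Path("evidence_image").rglob("*") if p.is_file()]
    with Pool(processes=8) as pool:  # workers stand in for workstations
        for path, sha in pool.imap_unordered(hash_file, evidence):
            print(sha, path)
```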
Data Collection Methods:
The main tool used in this approach is the MultiAgent Digital Investigation Toolkit (MADIK), an intelligent toolkit that uses case-based reasoning (CBR) to select suitable agents for finding evidence. It is built on the JADE multi-agent framework, and its agents share their findings through a blackboard, as sketched below. We also measured our variables on real-time investigation data from various agencies, which includes HashSet examinations (Ganesh, 2017).
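As an illustration of the blackboard pattern combined with a HashSet-style examination, the sketch below has one agent post matches against a known-bad hash set to a shared blackboard. The class names, the hash set, and the file data are hypothetical stand-ins; the actual MADIK agents run on JADE in Java, not as these Python classes.

```python
# Hypothetical sketch of a blackboard with one HashSet-style agent;
# not the actual MADIK/JADE implementation.
import hashlib

class Blackboard:
    """Shared store where agents post findings about examined items."""
    def __init__(self):
        self.findings: list[dict] = []

    def post(self, agent: str, finding: dict):
        self.findings.append({"agent": agent, **finding})

class HashSetAgent:
    """Flags files whose SHA-256 digest appears in a known-bad hash set."""
    def __init__(self, known_bad: set[str]):
        self.known_bad = known_bad

    def examine(self, path: str, data: bytes, board: Blackboard):
        sha = hashlib.sha256(data).hexdigest()
        if sha in self.known_bad:
            board.post("HashSetAgent", {"path": path, "sha256": sha})

# Illustrative data only.
board = Blackboard()
agent = HashSetAgent(known_bad={hashlib.sha256(b"contraband").hexdigest()})
agent.examine("suspect/file.bin", b"contraband", board)
print(board.findings)
```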
The methodology described in this paper was tested on larger, real-time data sets. The system's evaluation of this evidence took around two hours on a single machine, whereas two human investigators required about 24 hours to perform an equivalent examination. This figure does not include the time needed for the forensic tools to recover deleted artifacts, generate thumbnails, or compute hashes (Ganesh, 2017). As mentioned above, we applied this design and these tools to two real-time investigations and obtained correlated data that helped us find the evidence and analyze the information faster and more efficiently.