DISCOVER COMPUTING, vol.29, no.1, 2026 (SCI-Expanded, Scopus)
This study presents a comparative performance evaluation of state-of-the-art generative artificial intelligence models in the context of data analysis. Eight large language models (Claude, Gemini, ChatGPT, Qwen, Grok, DeepSeek, LLaMA, and Mistral) were tested on 13 distinct analytical tasks derived from the Titanic dataset. Performance was assessed using a multidimensional scoring rubric comprising five main categories (technical accuracy, analytical depth, machine learning application, presentation and communication, and originality) with a total of 14 sub-criteria. Each model's output was rated on a five-point scale by independent evaluators. Results indicate that Claude and Gemini outperformed the other models, particularly on tasks requiring reasoning and transparency, while LLaMA and Mistral showed weaknesses on higher-order cognitive tasks. Overall, the findings provide theoretical insight into the cognitive capacities of generative artificial intelligence models in data-driven contexts and offer practical guidance for model selection in applied analytics.
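The abstract does not state how the evaluators' sub-criterion ratings are combined into model-level results. A minimal sketch, assuming a simple mean over evaluators, sub-criteria, and tasks (all model names, task labels, criterion names, and scores below are illustrative, not taken from the paper), might look like this:

```python
# Illustrative sketch only: the paper does not publish its aggregation formula.
# Assumes each evaluator assigns a 1-5 rating per sub-criterion per task, and that
# a model's overall score is the mean over evaluators, sub-criteria, and tasks.
from statistics import mean

# Hypothetical ratings: model -> task -> sub-criterion -> list of evaluator scores (1-5)
ratings = {
    "Claude": {
        "task_01": {"technical_accuracy": [5, 4], "analytical_depth": [5, 5]},
        "task_02": {"technical_accuracy": [4, 4], "analytical_depth": [4, 5]},
    },
    "LLaMA": {
        "task_01": {"technical_accuracy": [3, 3], "analytical_depth": [2, 3]},
        "task_02": {"technical_accuracy": [3, 2], "analytical_depth": [2, 2]},
    },
}

def overall_score(model_ratings):
    """Average all evaluator ratings across tasks and sub-criteria for one model."""
    all_scores = [
        score
        for task in model_ratings.values()
        for criterion_scores in task.values()
        for score in criterion_scores
    ]
    return mean(all_scores)

for model, model_ratings in ratings.items():
    print(f"{model}: {overall_score(model_ratings):.2f} / 5")
```

A weighted scheme (for example, weighting the five main categories differently) would be an equally plausible reading of the rubric; the unweighted mean is used here only to keep the sketch self-contained.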