Yucong Duan's personal blog: http://blog.sciencenet.cn/u/YucongDuan


Running Data Tests for DIKWP Artificial Consciousness (Beginner's Version)

Published 2024-10-24 11:41 | Category: Paper Exchange

 Running Data Tests for DIKWP Artificial Consciousness System 

Yucong Duan

International Standardization Committee of Networked DIKWP for Artificial Intelligence Evaluation (DIKWP-SC)

World Artificial Consciousness CIC (WAC)

World Conference on Artificial Consciousness (WCAC)

(Email: duanyucong@hotmail.com)

Let's move forward with running the tests for the DIKWP Artificial Consciousness System based on Prof. Yucong Duan's Consciousness "Bug" Theory. We'll simulate human-like cognition by focusing on hypothesis-making, abstraction, and effectively handling incomplete, imprecise, and inconsistent data (the 3-No Problem). We'll go through each test scenario step by step, providing detailed code examples and explanations.

Testing Plan Overview

We'll conduct tests in the following scenarios:

  1. Incomplete Data

  2. Imprecise Data

  3. Inconsistent Data

For each scenario, we'll:

  • Process the data through each module (Data, Information, Knowledge, Wisdom, Purpose).

  • Observe and analyze the outputs at each stage.

  • Validate the system's ability to handle the 3-No Problem.

  • Ensure alignment with the defined purpose.
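The per-scenario flow above can be sketched as a minimal pipeline driver. This is an illustrative sketch only: `run_dikwp_pipeline` and the placeholder stage functions are hypothetical names standing in for the concrete module logic developed in each scenario.

```python
# Minimal sketch of the per-scenario DIKWP pipeline. Each stage consumes
# the previous stage's output; the lambdas are trivial placeholders.

def run_dikwp_pipeline(data_points, stages):
    """Pass data through each DIKWP stage in order, collecting outputs."""
    outputs = {}
    result = data_points
    for name, stage in stages:
        result = stage(result)
        outputs[name] = result
    return outputs

# Trivial placeholder stages to illustrate the flow only.
stages = [
    ('Data', lambda d: d),                                 # abstraction / hypotheses
    ('Information', lambda d: [len(d)]),                   # pairwise differences
    ('Knowledge', lambda i: {'edges': i}),                 # network construction
    ('Wisdom', lambda k: ['decision'] * len(k['edges'])),  # decision-making
    ('Purpose', lambda w: [f"{d} (aligned)" for d in w]),  # goal alignment
]

outputs = run_dikwp_pipeline([{'id': 1}], stages)
print(outputs['Purpose'])
```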

Test Scenario 1: Incomplete Data

1. Input Data

```python
# Data points with a missing 'size' attribute
data_points = [
    {'id': 1, 'color': 0.90},                # Red apple, size missing
    {'id': 2, 'color': 0.30, 'size': 0.85},  # Green apple
    {'id': 3, 'color': 0.10},                # Yellow apple, size missing
]
```

2. Data Module (D)

Hypothesis Generation

We need to fill in the missing 'size' attribute by generating hypotheses.

```python
def generate_hypothesis(data_point, missing_attribute, existing_data):
    # Use the average of existing values for the missing attribute
    available_values = [dp[missing_attribute] for dp in existing_data
                        if missing_attribute in dp]
    if available_values:
        hypothesis_value = sum(available_values) / len(available_values)
    else:
        hypothesis_value = 0.0  # Default value if no data is available
    # Add the hypothesized value to the data point
    data_point[missing_attribute] = hypothesis_value
    return data_point

# Generate hypotheses for missing attributes (mutates the data points in place)
for data_point in data_points:
    if 'size' not in data_point:
        generate_hypothesis(data_point, 'size', data_points)
```

Processed Data Set with Hypotheses:

```python
[
    {'id': 1, 'color': 0.90, 'size': 0.85},  # Hypothesized size
    {'id': 2, 'color': 0.30, 'size': 0.85},
    {'id': 3, 'color': 0.10, 'size': 0.85},  # Hypothesized size
]
```

Data Abstraction

```python
def abstract_data(data_point, key_attributes):
    # Keep only the key attributes
    abstracted_data_point = {key: data_point[key]
                             for key in key_attributes if key in data_point}
    return abstracted_data_point

key_attributes = ['id', 'color', 'size']
D_subj = [abstract_data(dp, key_attributes) for dp in data_points]
```

Abstracted Data Set (D_subj):

```python
[
    {'id': 1, 'color': 0.90, 'size': 0.85},
    {'id': 2, 'color': 0.30, 'size': 0.85},
    {'id': 3, 'color': 0.10, 'size': 0.85},
]
```

3. Information Module (I)

Extracting Subjective Differences

```python
def Delta_subj(data_point_i, data_point_j, attribute_key):
    # Calculate the difference between two data points for a specific attribute
    if attribute_key in data_point_i and attribute_key in data_point_j:
        difference = abs(data_point_i[attribute_key] - data_point_j[attribute_key])
    else:
        difference = None  # Handle missing data
    return difference

I_subj = []

# Extract pairwise differences for 'color' and 'size'
attributes = ['color', 'size']
for attr in attributes:
    for i in range(len(D_subj)):
        for j in range(i + 1, len(D_subj)):
            data_point_i = D_subj[i]
            data_point_j = D_subj[j]
            difference = Delta_subj(data_point_i, data_point_j, attr)
            I_subj.append({
                'data_point_i': data_point_i['id'],
                'data_point_j': data_point_j['id'],
                'attribute': attr,
                'difference': difference,
            })
```

Subjective Information Set (I_subj):

```python
[
    {'data_point_i': 1, 'data_point_j': 2, 'attribute': 'color', 'difference': 0.60},
    {'data_point_i': 1, 'data_point_j': 3, 'attribute': 'color', 'difference': 0.80},
    {'data_point_i': 2, 'data_point_j': 3, 'attribute': 'color', 'difference': 0.20},
    {'data_point_i': 1, 'data_point_j': 2, 'attribute': 'size', 'difference': 0.00},
    {'data_point_i': 1, 'data_point_j': 3, 'attribute': 'size', 'difference': 0.00},
    {'data_point_i': 2, 'data_point_j': 3, 'attribute': 'size', 'difference': 0.00},
]
```

4. Knowledge Module (K)

Constructing the Subjective Knowledge Network

```python
class SubjectiveKnowledgeNetwork:
    def __init__(self):
        self.nodes = {}
        self.edges = []

    def add_node(self, data_point):
        if data_point['id'] not in self.nodes:
            self.nodes[data_point['id']] = data_point

    def add_edge(self, data_point_i, data_point_j, attribute, difference):
        edge = {
            'node_i': data_point_i['id'],
            'node_j': data_point_j['id'],
            'attribute': attribute,
            'difference': difference,
        }
        self.edges.append(edge)

    def get_network(self):
        return {'nodes': self.nodes, 'edges': self.edges}

# Initialize the Subjective Knowledge Network
K_subj = SubjectiveKnowledgeNetwork()

# Populate the network from the subjective information set
for info in I_subj:
    data_point_i = next(dp for dp in D_subj if dp['id'] == info['data_point_i'])
    data_point_j = next(dp for dp in D_subj if dp['id'] == info['data_point_j'])
    K_subj.add_node(data_point_i)
    K_subj.add_node(data_point_j)
    K_subj.add_edge(data_point_i, data_point_j, info['attribute'], info['difference'])
```

Subjective Knowledge Network (K_subj):

  • Nodes:

```python
{
    1: {'id': 1, 'color': 0.90, 'size': 0.85},
    2: {'id': 2, 'color': 0.30, 'size': 0.85},
    3: {'id': 3, 'color': 0.10, 'size': 0.85},
}
```
  • Edges:

```python
[
    {'node_i': 1, 'node_j': 2, 'attribute': 'color', 'difference': 0.60},
    {'node_i': 1, 'node_j': 3, 'attribute': 'color', 'difference': 0.80},
    {'node_i': 2, 'node_j': 3, 'attribute': 'color', 'difference': 0.20},
    {'node_i': 1, 'node_j': 2, 'attribute': 'size', 'difference': 0.00},
    {'node_i': 1, 'node_j': 3, 'attribute': 'size', 'difference': 0.00},
    {'node_i': 2, 'node_j': 3, 'attribute': 'size', 'difference': 0.00},
]
```
5. Wisdom Module (W)

Making Decisions Based on Hypothesized Knowledge

```python
def subjective_decision_making(K_subj, threshold):
    network = K_subj.get_network()
    nodes = network['nodes']
    edges = network['edges']
    decision_output = []
    for edge in edges:
        node_i = nodes[edge['node_i']]
        node_j = nodes[edge['node_j']]
        attribute = edge['attribute']
        difference = edge['difference']
        if difference is not None and difference < threshold:
            decision_output.append(
                f"Group nodes {node_i['id']} and {node_j['id']} together "
                f"based on similar {attribute}."
            )
        else:
            decision_output.append(
                f"Do not group nodes {node_i['id']} and {node_j['id']} "
                f"based on {attribute}."
            )
    return decision_output

# Apply decision-making with a threshold for grouping
threshold = 0.25
decision_output = subjective_decision_making(K_subj, threshold)
```

Decision Output:

```python
[
    'Do not group nodes 1 and 2 based on color.',
    'Do not group nodes 1 and 3 based on color.',
    'Group nodes 2 and 3 together based on similar color.',
    'Group nodes 1 and 2 together based on similar size.',
    'Group nodes 1 and 3 together based on similar size.',
    'Group nodes 2 and 3 together based on similar size.',
]
```

6. Purpose Module (P)

Aligning Decisions with the Purpose

```python
class SubjectivePurposeModule:
    def __init__(self, goal):
        self.goal = goal

    def transform_input_to_output(self, decision_output):
        aligned_output = []
        for decision in decision_output:
            # Align decisions with the goal
            if self.is_aligned_with_goal(decision):
                aligned_output.append(decision)
            else:
                # Adjust the decision to align with the goal
                adjusted_decision = f"{decision} (Adjusted to align with goal: {self.goal})"
                aligned_output.append(adjusted_decision)
        return aligned_output

    def is_aligned_with_goal(self, decision):
        # Simplified check: grouping decisions are assumed to serve the goal.
        # Note the capital 'G' matches "Group ..." but not "Do not group ...".
        if "Group" in decision:
            return True
        else:
            return False

# Define the system's goal
system_goal = "Optimize grouping for efficient processing"

# Initialize the Purpose Module
P_subj = SubjectivePurposeModule(system_goal)

# Align decisions with the purpose
aligned_output = P_subj.transform_input_to_output(decision_output)
```

Aligned Output:

```python
[
    'Do not group nodes 1 and 2 based on color. (Adjusted to align with goal: Optimize grouping for efficient processing)',
    'Do not group nodes 1 and 3 based on color. (Adjusted to align with goal: Optimize grouping for efficient processing)',
    'Group nodes 2 and 3 together based on similar color.',
    'Group nodes 1 and 2 together based on similar size.',
    'Group nodes 1 and 3 together based on similar size.',
    'Group nodes 2 and 3 together based on similar size.',
]
```

7. Validation and Analysis

Observations:
  • Hypothesis Generation: The system successfully generated hypotheses for missing 'size' attributes using the average of available sizes.

  • Data Abstraction: The data was abstracted to focus on 'color' and 'size'.

  • Subjective Differences: Differences were calculated, considering that some data was hypothesized.

  • Knowledge Network: The network was built, capturing relationships based on both actual and hypothesized data.

  • Decision-Making: Decisions were made to group or not group nodes based on a defined threshold.

  • Purpose Alignment: Decisions were adjusted to align with the goal of optimizing grouping.

Validation Metrics:
  • Handling Incomplete Data: Successfully generated plausible hypotheses to fill in missing data.

  • Decision Effectiveness: Made logical decisions given the incomplete information.

  • Purpose Alignment: Adjusted decisions to align with the defined goal.

Test Scenario 2: Imprecise Data

1. Input Data

```python
# Data points with imprecise 'color' values
data_points = [
    {'id': 1, 'color': 0.90, 'size': 0.8},   # Red apple
    {'id': 2, 'color': 0.85, 'size': 0.75},  # Reddish apple
    {'id': 3, 'color': 0.10, 'size': 0.85},  # Yellow apple
]
```

2. Data Module (D)

Data Abstraction

Since the data is imprecise, we'll abstract it by categorizing 'color' into broader categories.

```python
def categorize_color(value):
    if value >= 0.80:
        return 'Red'
    elif 0.30 <= value < 0.80:
        return 'Green'
    else:
        return 'Yellow'

for data_point in data_points:
    data_point['color_category'] = categorize_color(data_point['color'])

# Reuse abstract_data from Scenario 1
key_attributes = ['id', 'color_category', 'size']
D_subj = [abstract_data(dp, key_attributes) for dp in data_points]
```

Abstracted Data Set (D_subj):

```python
[
    {'id': 1, 'color_category': 'Red', 'size': 0.8},
    {'id': 2, 'color_category': 'Red', 'size': 0.75},
    {'id': 3, 'color_category': 'Yellow', 'size': 0.85},
]
```

3. Information Module (I)

Extracting Subjective Differences

```python
def Delta_subj_categorical(data_point_i, data_point_j, attribute_key):
    # Difference for categorical data: 0 if same category, 1 otherwise
    if data_point_i[attribute_key] == data_point_j[attribute_key]:
        difference = 0  # No difference
    else:
        difference = 1  # Different categories
    return difference

I_subj = []
attributes = ['color_category', 'size']
for attr in attributes:
    for i in range(len(D_subj)):
        for j in range(i + 1, len(D_subj)):
            data_point_i = D_subj[i]
            data_point_j = D_subj[j]
            if attr == 'color_category':
                difference = Delta_subj_categorical(data_point_i, data_point_j, attr)
            else:
                difference = Delta_subj(data_point_i, data_point_j, attr)
            I_subj.append({
                'data_point_i': data_point_i['id'],
                'data_point_j': data_point_j['id'],
                'attribute': attr,
                'difference': difference,
            })
```

Subjective Information Set (I_subj):

```python
[
    {'data_point_i': 1, 'data_point_j': 2, 'attribute': 'color_category', 'difference': 0},
    {'data_point_i': 1, 'data_point_j': 3, 'attribute': 'color_category', 'difference': 1},
    {'data_point_i': 2, 'data_point_j': 3, 'attribute': 'color_category', 'difference': 1},
    # Differences in 'size' as before
]
```

4. Knowledge Module (K)

Construct the knowledge network as in the previous scenario, incorporating the new 'color_category' attribute.
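A minimal sketch of that step, using a plain dict in place of the `SubjectiveKnowledgeNetwork` class for brevity; the node values repeat this scenario's abstracted data set:

```python
# Sketch: build the scenario-2 knowledge network from categorical
# 'color_category' differences (0 = same category, 1 = different).
D_subj = [
    {'id': 1, 'color_category': 'Red', 'size': 0.8},
    {'id': 2, 'color_category': 'Red', 'size': 0.75},
    {'id': 3, 'color_category': 'Yellow', 'size': 0.85},
]

K_subj = {'nodes': {dp['id']: dp for dp in D_subj}, 'edges': []}
for i in range(len(D_subj)):
    for j in range(i + 1, len(D_subj)):
        dp_i, dp_j = D_subj[i], D_subj[j]
        difference = 0 if dp_i['color_category'] == dp_j['color_category'] else 1
        K_subj['edges'].append({
            'node_i': dp_i['id'], 'node_j': dp_j['id'],
            'attribute': 'color_category', 'difference': difference,
        })
```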

5. Wisdom Module (W)

Apply decision-making considering the imprecision in 'color' has been abstracted.

Decision Output:

Decisions will be based on the categorical differences and the 'size' differences.
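As a sketch of that step: categorical differences are 0 or 1, so any threshold strictly between 0 and 1 groups exactly the same-category pairs. The edge list below restates this scenario's differences inline rather than going through the network class:

```python
# Sketch: threshold-based decisions over scenario-2 edges.
edges = [
    {'node_i': 1, 'node_j': 2, 'attribute': 'color_category', 'difference': 0},
    {'node_i': 1, 'node_j': 3, 'attribute': 'color_category', 'difference': 1},
    {'node_i': 2, 'node_j': 3, 'attribute': 'color_category', 'difference': 1},
    {'node_i': 1, 'node_j': 2, 'attribute': 'size', 'difference': 0.05},
    {'node_i': 1, 'node_j': 3, 'attribute': 'size', 'difference': 0.05},
    {'node_i': 2, 'node_j': 3, 'attribute': 'size', 'difference': 0.10},
]

threshold = 0.25
decision_output = []
for edge in edges:
    if edge['difference'] < threshold:
        decision_output.append(
            f"Group nodes {edge['node_i']} and {edge['node_j']} together "
            f"based on similar {edge['attribute']}.")
    else:
        decision_output.append(
            f"Do not group nodes {edge['node_i']} and {edge['node_j']} "
            f"based on {edge['attribute']}.")
```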

6. Purpose Module (P)

Align decisions with the goal of effectively handling imprecise data.

Aligned Output:

Decisions should reflect grouping based on the broader 'color_category' to compensate for imprecision.
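A minimal inline sketch of that alignment, applying the same rule as Scenario 1's Purpose Module; the goal string and sample decisions here are illustrative:

```python
# Sketch: align scenario-2 decisions with an illustrative goal. Grouping
# decisions pass through; others are annotated, as in Scenario 1.
system_goal = "Handle imprecise data via category-level grouping"  # illustrative

decision_output = [
    'Group nodes 1 and 2 together based on similar color_category.',
    'Do not group nodes 1 and 3 based on color_category.',
]

aligned_output = []
for decision in decision_output:
    if "Group" in decision:  # capital 'G' excludes "Do not group ..."
        aligned_output.append(decision)
    else:
        aligned_output.append(f"{decision} (Adjusted to align with goal: {system_goal})")
```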

7. Validation and Analysis

Observations:
  • Handling Imprecise Data: Successfully abstracted imprecise 'color' values into broader categories.

  • Decision Effectiveness: Decisions made were logical, grouping items with similar 'color_category'.

  • Purpose Alignment: Decisions aligned with handling imprecision effectively.

Test Scenario 3: Inconsistent Data

1. Input Data

```python
# Data points with inconsistent 'size' for the same 'color'
data_points = [
    {'id': 1, 'color': 0.90, 'size': 0.8},   # Red apple
    {'id': 2, 'color': 0.90, 'size': 0.7},   # Red apple, different size
    {'id': 3, 'color': 0.30, 'size': 0.85},  # Green apple
]
```

2. Data Module (D)

No hypothesis generation is needed here, but the system must explicitly acknowledge the inconsistencies in the data.
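One way to make that acknowledgement concrete is a small consistency check. This is a sketch: the `detect_inconsistencies` helper, its grouping key, and its tolerance are illustrative choices, not part of the original design.

```python
from collections import defaultdict

# Sketch: flag records whose 'size' values conflict for the same 'color'.
data_points = [
    {'id': 1, 'color': 0.90, 'size': 0.8},
    {'id': 2, 'color': 0.90, 'size': 0.7},   # same color, different size
    {'id': 3, 'color': 0.30, 'size': 0.85},
]

def detect_inconsistencies(points, key='color', value='size', tolerance=0.05):
    """Return (key_value, ids) pairs where records sharing `key` disagree on `value`."""
    groups = defaultdict(list)
    for dp in points:
        groups[dp[key]].append(dp)
    inconsistent = []
    for key_value, dps in groups.items():
        values = [dp[value] for dp in dps]
        if max(values) - min(values) > tolerance:
            inconsistent.append((key_value, [dp['id'] for dp in dps]))
    return inconsistent
```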

3. Information Module (I)

Extract differences, noting inconsistencies.

4. Knowledge Module (K)

Build the knowledge network, highlighting inconsistent relationships.
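A sketch of such a network, where each edge carries an `inconsistent` tag that downstream modules can inspect; the tagging rule (same 'color' but differing 'size') is an illustrative choice:

```python
# Sketch: a scenario-3 network whose edges are tagged when they connect
# conflicting evidence, so the Wisdom Module can treat them differently.
nodes = {
    1: {'id': 1, 'color': 0.90, 'size': 0.8},
    2: {'id': 2, 'color': 0.90, 'size': 0.7},
    3: {'id': 3, 'color': 0.30, 'size': 0.85},
}

edges = []
ids = sorted(nodes)
for a in range(len(ids)):
    for b in range(a + 1, len(ids)):
        n_i, n_j = nodes[ids[a]], nodes[ids[b]]
        size_diff = abs(n_i['size'] - n_j['size'])
        edges.append({
            'node_i': n_i['id'], 'node_j': n_j['id'],
            'attribute': 'size', 'difference': round(size_diff, 2),
            'inconsistent': n_i['color'] == n_j['color'] and size_diff > 0,
        })

K_subj = {'nodes': nodes, 'edges': edges}
```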

5. Wisdom Module (W)

Making Decisions that Reconcile Inconsistencies

```python
def reconcile_inconsistencies(decision_output):
    # Placeholder reconciliation: decisions pass through unchanged.
    # A fuller implementation could average conflicting values or
    # flag the affected records for review.
    reconciled_output = []
    for decision in decision_output:
        reconciled_output.append(decision)
    return reconciled_output

# Apply decision-making with a tighter threshold
decision_output = subjective_decision_making(K_subj, threshold=0.15)

# Reconcile inconsistencies
decision_output = reconcile_inconsistencies(decision_output)
```

6. Purpose Module (P)

Align decisions with the goal of optimizing consistency handling.
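A minimal sketch of that alignment step, assuming decisions carry an `inconsistent` flag from the Knowledge Module; the flag-and-review policy and the goal string are illustrative assumptions:

```python
# Sketch: purpose alignment for Scenario 3. Decisions resting on
# inconsistent evidence are routed for review rather than acted on.
system_goal = "Resolve or flag inconsistent observations"  # illustrative

decisions = [
    {'text': 'Group nodes 1 and 2 based on color.', 'inconsistent': True},
    {'text': 'Group nodes 1 and 3 based on size.', 'inconsistent': False},
]

aligned_output = []
for d in decisions:
    if d['inconsistent']:
        aligned_output.append(f"{d['text']} (Flagged for review: {system_goal})")
    else:
        aligned_output.append(d['text'])
```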

7. Validation and Analysis

Observations:
  • Handling Inconsistencies: The system identified and highlighted inconsistencies in 'size' for the same 'color'.

  • Decision Effectiveness: Decisions aimed at reconciling inconsistencies, possibly by averaging sizes or flagging data for review.

  • Purpose Alignment: Decisions were aligned with the goal of managing inconsistencies.

Overall Conclusions
  • The system successfully handled incomplete, imprecise, and inconsistent data by generating hypotheses, abstracting data, and making informed decisions.

  • The modular approach allowed for flexibility in handling different types of data issues.

  • The alignment with the defined purpose ensured that all decisions contributed to the system's overarching goals.

  • The tests demonstrate the system's capability to simulate human-like cognition in dealing with uncertainty.

Next Steps
  • Refinement: Based on test outcomes, refine hypothesis generation methods and decision criteria to improve accuracy.

  • Scaling: Test the system with larger datasets to evaluate performance and scalability.

  • Integration: Consider integrating machine learning techniques for better hypothesis generation and decision-making.

  • Evaluation Metrics: Develop quantitative metrics to assess performance improvements over iterations.
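As one candidate for such a metric, the mean absolute error of hypothesized (imputed) attribute values against held-out ground truth would quantify hypothesis quality across iterations. `imputation_mae` is an illustrative helper, not an existing function, and the sample values are made up for the sketch:

```python
# Sketch of a quantitative metric: mean absolute error of hypothesized
# attribute values against known true values for the same records.
def imputation_mae(hypothesized, ground_truth, attribute='size'):
    """Average absolute error of hypothesized values vs. true values."""
    errors = [abs(h[attribute] - t[attribute])
              for h, t in zip(hypothesized, ground_truth)]
    return sum(errors) / len(errors)

# Illustrative held-out comparison (values are hypothetical)
hypothesized = [{'id': 1, 'size': 0.85}, {'id': 3, 'size': 0.85}]
ground_truth = [{'id': 1, 'size': 0.80}, {'id': 3, 'size': 0.90}]
print(imputation_mae(hypothesized, ground_truth))
```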




https://blog.sciencenet.cn/blog-3429562-1456773.html
