
Config Ref man (#1851)

* Improve config doc and remove deprecated function.

* Update config.pyi

* Apply suggestions from code review

Giang's feedback

Co-authored-by: Đỗ Trường Giang <do.giang@avaiga.com>

* Update config.pyi

* Improve config doc and remove deprecated function.

* linter

* Config Reference manual

* Update config.pyi

* Ref man

* Ref man improvements

* yet some other improvements

* yet some other improvements

* consistency

* example

* linter

* ruff

* per Fab

* per Fab

* Stack broadcasts so that not one is lost (#1882)

* Stack broadcasts so that not one is lost
resolves #1608

* fix test and typo

* test

---------

Co-authored-by: Fred Lefévère-Laoide <Fred.Lefevere-Laoide@Taipy.io>

* add on page load (#1883) (#1884)

* add on page load

* fix lint

* per Fabien

* Toggle examples (#1885)

* Toggle examples

* lint

* example builder + markdown
fix crash in datatype

* rename folder

* files rename

---------

Co-authored-by: Fred Lefévère-Laoide <Fred.Lefevere-Laoide@Taipy.io>

* Feature/#923 create taipy common (#1833)

* feat(taipy-common): add common package and move config, _cli and logger to it

* feat(taipy-common): refactor imports

* feat(taipy-common): refactor release action

* feat(taipy-common): fix templates tests

* fix(taipy-common): fix config.pyi path

Update config.pyi

* chore(taipy-common): fix missing imports on pyi_header

* chore(taipy-common): remove cli testing section on partial tests

* Update config.pyi

* chore(taipy-common): fix config import on migrate_cli

* fix: missing common when importing taipy config

* chore(taipy-common): bump dev version

* chore(taipy-common): fix package name

* chore(taipy-common): fix import format

---------

Co-authored-by: joaoandre-avaiga <joaoandre-avaiga@users.noreply.github.com>
Co-authored-by: trgiangdo <dtr.giang.1299@gmail.com>

* added inject by method

* fixed issue of import in enterprise

* rename class

* remove unnecessary inject_method fct

* fixed wrong code

* manage broadcast (#1894)

* manage broadcast
resolves #1608
resolves #1891

* should not be

---------

Co-authored-by: Fred Lefévère-Laoide <Fred.Lefevere-Laoide@Taipy.io>

* Update README.md (#1869)

Made some small changes that will enhance the quality of reading

* Update contributors.txt (#1877)

* Update CODE_OF_CONDUCT.md Corrected the Grammar Part (#1888)

* merge issues

* Update config.pyi

* Apply suggestions from code review

* make the refman work with taipy.common

* Update config.pyi

* Make linter happier

* Make linter happier

* Make linter happier

---------

Co-authored-by: jrobinAV <jrobinAV@users.noreply.github.com>
Co-authored-by: Đỗ Trường Giang <do.giang@avaiga.com>
Co-authored-by: namnguyen <namnguyen20999@gmail.com>
Co-authored-by: Fred Lefévère-Laoide <90181748+FredLL-Avaiga@users.noreply.github.com>
Co-authored-by: Fred Lefévère-Laoide <Fred.Lefevere-Laoide@Taipy.io>
Co-authored-by: Dinh Long Nguyen <dinhlongviolin1@gmail.com>
Co-authored-by: João André <88906996+joaoandre-avaiga@users.noreply.github.com>
Co-authored-by: joaoandre-avaiga <joaoandre-avaiga@users.noreply.github.com>
Co-authored-by: trgiangdo <dtr.giang.1299@gmail.com>
Co-authored-by: Toan Quach <shiro@Shiros-MacBook-Pro.local>
Co-authored-by: Kushal Agrawal <98145879+kushal34712@users.noreply.github.com>
Co-authored-by: Deepanshu <110890939+DeepanshuProgrammer@users.noreply.github.com>
Jean-Robin 7 months ago
parent
revision
9373005365
83 changed files with 1762 additions and 1841 deletions
  1. taipy/common/__init__.py (+2 -0)
  2. taipy/common/config/__init__.py (+18 -8)
  3. taipy/common/config/checker/__init__.py (+4 -0)
  4. taipy/common/config/checker/issue.py (+6 -7)
  5. taipy/common/config/checker/issue_collector.py (+4 -6)
  6. taipy/common/config/common/frequency.py (+8 -12)
  7. taipy/common/config/common/scope.py (+3 -3)
  8. taipy/common/config/config.py (+25 -26)
  9. taipy/common/config/config.pyi (+43 -42)
  10. taipy/common/config/global_app/global_app_config.py (+9 -7)
  11. taipy/common/config/section.py (+11 -1)
  12. taipy/core/__init__.py (+8 -9)
  13. taipy/core/_entity/_ready_to_run_property.py (+3 -3)
  14. taipy/core/_entity/submittable.py (+26 -26)
  15. taipy/core/_orchestrator/_dispatcher/_job_dispatcher.py (+1 -0)
  16. taipy/core/_orchestrator/_orchestrator.py (+2 -0)
  17. taipy/core/common/_mongo_connector.py (+1 -0)
  18. taipy/core/common/mongo_default_document.py (+3 -3)
  19. taipy/core/config/core_section.py (+89 -54)
  20. taipy/core/config/data_node_config.py (+48 -40)
  21. taipy/core/config/job_config.py (+26 -19)
  22. taipy/core/config/scenario_config.py (+73 -40)
  23. taipy/core/config/task_config.py (+33 -26)
  24. taipy/core/cycle/cycle.py (+20 -20)
  25. taipy/core/cycle/cycle_id.py (+1 -0)
  26. taipy/core/data/_abstract_sql.py (+13 -12)
  27. taipy/core/data/_file_datanode_mixin.py (+37 -37)
  28. taipy/core/data/_tabular_datanode_mixin.py (+2 -2)
  29. taipy/core/data/aws_s3.py (+17 -35)
  30. taipy/core/data/csv.py (+20 -40)
  31. taipy/core/data/data_node.py (+186 -182)
  32. taipy/core/data/data_node_id.py (+2 -0)
  33. taipy/core/data/excel.py (+30 -49)
  34. taipy/core/data/generic.py (+12 -28)
  35. taipy/core/data/in_memory.py (+7 -27)
  36. taipy/core/data/json.py (+14 -33)
  37. taipy/core/data/mongo.py (+26 -42)
  38. taipy/core/data/operator.py (+1 -3)
  39. taipy/core/data/parquet.py (+64 -80)
  40. taipy/core/data/pickle.py (+7 -27)
  41. taipy/core/data/sql.py (+28 -44)
  42. taipy/core/data/sql_table.py (+22 -40)
  43. taipy/core/exceptions/__init__.py (+2 -1)
  44. taipy/core/job/job.py (+82 -82)
  45. taipy/core/job/job_id.py (+1 -0)
  46. taipy/core/notification/__init__.py (+3 -5)
  47. taipy/core/notification/_topic.py (+1 -1)
  48. taipy/core/notification/core_event_consumer.py (+26 -33)
  49. taipy/core/notification/event.py (+27 -16)
  50. taipy/core/notification/notifier.py (+21 -21)
  51. taipy/core/notification/registration_id.py (+1 -0)
  52. taipy/core/orchestrator.py (+14 -14)
  53. taipy/core/reason/__init__.py (+3 -1)
  54. taipy/core/reason/reason.py (+1 -1)
  55. taipy/core/reason/reason_collection.py (+3 -3)
  56. taipy/core/scenario/scenario.py (+325 -301)
  57. taipy/core/scenario/scenario_id.py (+1 -0)
  58. taipy/core/sequence/sequence.py (+104 -74)
  59. taipy/core/sequence/sequence_id.py (+1 -0)
  60. taipy/core/submission/submission.py (+93 -60)
  61. taipy/core/submission/submission_id.py (+1 -0)
  62. taipy/core/taipy.py (+9 -9)
  63. taipy/core/task/task.py (+47 -23)
  64. taipy/core/task/task_id.py (+1 -0)
  65. taipy/gui/__init__.py (+2 -2)
  66. taipy/gui/_gui_section.py (+4 -4)
  67. taipy/gui/_renderers/__init__.py (+4 -4)
  68. taipy/gui/builder/_element.py (+3 -2)
  69. taipy/gui/builder/page.py (+2 -2)
  70. taipy/gui/gui.py (+8 -8)
  71. taipy/gui/gui_actions.py (+5 -5)
  72. taipy/gui/icon.py (+3 -2)
  73. taipy/gui/partial.py (+3 -3)
  74. taipy/gui/state.py (+2 -2)
  75. taipy/rest/__init__.py (+2 -2)
  76. taipy/rest/api/resources/datanode.py (+0 -4)
  77. taipy/rest/api/schemas/datanode.py (+0 -2)
  78. tests/common/config/utils/checker_for_tests.py (+1 -1)
  79. tests/core/config/test_data_node_config.py (+0 -56)
  80. tests/core/config/test_task_config.py (+0 -50)
  81. tests/core/data/test_data_node.py (+0 -12)
  82. tests/core/data/test_write_parquet_data_node.py (+1 -1)
  83. tests/rest/json/expected/datanode.json (+0 -1)

+ 2 - 0
taipy/common/__init__.py

@@ -8,3 +8,5 @@
 # Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
 # an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
 # specific language governing permissions and limitations under the License.
+
+"""Common functionalities for the taipy package."""

+ 18 - 8
taipy/common/config/__init__.py

@@ -9,9 +9,7 @@
 # an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
 # specific language governing permissions and limitations under the License.
 
-"""# Taipy `config` Package
-
-The `taipy.common.config` package provides features to configure a Taipy application.
+""" The `taipy.common.config` package provides features to configure a Taipy application.
 
 Its main class is the `Config^` singleton. It exposes various static methods
 and attributes to configure the Taipy application and retrieve the configuration values.
@@ -50,8 +48,6 @@ and attributes to configure the Taipy application and retrieve the configuration
 from typing import List
 
 from ._init import Config
-from .checker.issue import Issue
-from .checker.issue_collector import IssueCollector
 from .common.frequency import Frequency
 from .common.scope import Scope
 from .global_app.global_app_config import GlobalAppConfig
@@ -60,20 +56,34 @@ from .unique_section import UniqueSection
 
 
 def _config_doc(func):
-    def func_with_doc(section, attribute_name, default, configuration_methods, add_to_unconflicted_sections=False):
+    def func_with_doc(section, attr_name, default, configuration_methods, add_to_unconflicted_sections=False):
         import os
 
         if os.environ.get("GENERATING_TAIPY_DOC", None) and os.environ["GENERATING_TAIPY_DOC"] == "true":
-            with open("config_doc.txt", "a") as f:
+            with (open("config_doc.txt", "a") as f):
                 from inspect import signature
 
+                # Add the documentation for configure methods
                 for exposed_configuration_method, configuration_method in configuration_methods:
                     annotation = "    @staticmethod\n"
                     sign = "    def " + exposed_configuration_method + str(signature(configuration_method)) + ":\n"
                     doc = '        """' + configuration_method.__doc__ + '"""\n'
                     content = "        pass\n\n"
                     f.write(annotation + sign + doc + content)
-        return func(section, attribute_name, default, configuration_methods, add_to_unconflicted_sections)
+
+                # Add the documentation for the attribute
+                annotation = '    @property\n'
+                sign = f"    def {attr_name} (self) -> {section.__name__}:\n"
+                if issubclass(section, UniqueSection):
+                    doc = f'        """The configured {section.__name__} section."""\n'
+                elif issubclass(section, Section):
+                    doc = f'        """The configured {section.__name__} sections."""\n'
+                else:
+                    print(f" ERROR - Invalid section class: {section.__name__}")  # noqa: T201
+                    return
+                content = "        pass\n\n"
+                f.write(annotation + sign + doc + content)
+        return func(section, attr_name, default, configuration_methods, add_to_unconflicted_sections)
 
     return func_with_doc
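
For context, a minimal sketch of how this stub generation could be driven. The driver script is hypothetical; the GENERATING_TAIPY_DOC flag and the config_doc.txt output come from the decorator above:

# hypothetical driver script for regenerating the config stubs
import os

# Setting the flag before importing taipy makes every decorated section
# registration append its method and property stubs to config_doc.txt.
os.environ["GENERATING_TAIPY_DOC"] = "true"

import taipy  # noqa: E402  - importing triggers the section registrations

# config_doc.txt can then be merged into taipy/common/config/config.pyi.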
 

+ 4 - 0
taipy/common/config/checker/__init__.py

@@ -8,3 +8,7 @@
 # Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
 # an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
 # specific language governing permissions and limitations under the License.
+""""""
+
+from .issue import Issue
+from .issue_collector import IssueCollector

+ 6 - 7
taipy/common/config/checker/issue.py

@@ -20,22 +20,21 @@ class Issue:
     `Issue` is a dataclass that represents an issue detected during the configuration check
     process. It contains the necessary information to understand the issue and help the user to fix
     it.
-
-    Attributes:
-        level (str): Level of the issue among ERROR, WARNING, INFO.
-        field (str): Configuration field on which the issue has been detected.
-        value (Any): Value of the field on which the issue has been detected.
-        message (str): Human readable message to help the user fix the issue.
-        tag (Optional[str]): Optional tag to be used to filter issues.
     """
 
     level: str
+    """Level of the issue among ERROR, WARNING, INFO."""
     field: str
+    """Configuration field on which the issue has been detected."""
     value: Any
+    """Value of the field on which the issue has been detected."""
     message: str
+    """Human readable message to help the user fix the issue."""
     tag: Optional[str]
+    """Optional tag to be used to filter issues."""
 
     def __str__(self) -> str:
+        """Return a human-readable string representation of the issue."""
         message = self.message
 
         if self.value:

+ 4 - 6
taipy/common/config/checker/issue_collector.py

@@ -22,12 +22,6 @@ class IssueCollector:
     method. It contains all the collected issues separated by severity (ERROR, WARNING, INFO).
     Each issue is an instance of the class `Issue^` and contains the necessary information to
     understand the issue and help the user to fix it.
-
-    Attributes:
-        errors (List[Issue^]): List of ERROR issues collected.
-        warnings (List[Issue^]): List WARNING issues collected.
-        infos (List[Issue^]): List INFO issues collected.
-        all (List[Issue^]): List of all issues collected ordered by decreasing level (ERROR, WARNING and INFO).
     """
 
     _ERROR_LEVEL = "ERROR"
@@ -41,18 +35,22 @@ class IssueCollector:
 
     @property
     def all(self) -> List[Issue]:
+        """List of all issues collected ordered by decreasing level (ERROR, WARNING and INFO)."""
         return self._errors + self._warnings + self._infos
 
     @property
     def infos(self) -> List[Issue]:
+        """List INFO issues collected."""
         return self._infos
 
     @property
     def warnings(self) -> List[Issue]:
+        """List WARNING issues collected."""
         return self._warnings
 
     @property
     def errors(self) -> List[Issue]:
+        """List of ERROR issues collected."""
         return self._errors
 
     def _add_error(self, field: str, value: Any, message: str, checker_name: str) -> None:
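
A short usage sketch for these properties, assuming the standard Config.check() entry point, which returns an IssueCollector:

from taipy.common.config import Config

collector = Config.check()  # runs the checkers and collects issues

# Issues are exposed by decreasing severity.
for issue in collector.errors:
    print(f"ERROR on '{issue.field}': {issue.message}")
for issue in collector.warnings:
    print(f"WARNING on '{issue.field}': {issue.message}")
print(f"{len(collector.all)} issue(s) in total")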

+ 8 - 12
taipy/common/config/common/frequency.py

@@ -15,6 +15,14 @@ from ..common._repr_enum import _ReprEnum
 class Frequency(_ReprEnum):
     """Frequency of the recurrence of `Cycle^` and `Scenario^` objects.
 
+    This enumeration can have the following values:
+
+    - `DAILY`: Daily frequency, a new cycle is created for each day.
+    - `WEEKLY`: Weekly frequency, a new cycle is created for each week (from Monday to Sunday).
+    - `MONTHLY`: Monthly frequency, a new cycle is created for each month.
+    - `QUARTERLY`: Quarterly frequency, a new cycle is created for each quarter.
+    - `YEARLY`: Yearly frequency, a new cycle is created for each year.
+
     The frequency must be provided in the `ScenarioConfig^`.
 
     Each recurrent scenario is attached to the cycle corresponding to the creation date and the
@@ -24,18 +32,6 @@ class Frequency(_ReprEnum):
     For instance, when scenarios have a _MONTHLY_ frequency, one cycle will be created for each
     month (January, February, March, etc.). A new scenario created on February 10th, gets
     attached to the _February_ cycle.
-
-    The frequency is implemented as an enumeration with the following possible values:
-
-    - With a _DAILY_ frequency, a new cycle is created for each day.
-
-    - With a _WEEKLY_ frequency, a new cycle is created for each week (from Monday to Sunday).
-
-    - With a _MONTHLY_ frequency, a new cycle is created for each month.
-
-    - With a _QUARTERLY_ frequency, a new cycle is created for each quarter.
-
-    - With a _YEARLY_ frequency, a new cycle is created for each year.
     """
 
     DAILY = 1
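
To illustrate, a hedged sketch of attaching a frequency to a scenario configuration; the scenario id is illustrative and the task config list is left empty:

from taipy.common.config import Config, Frequency

# A new cycle is created for each month; scenarios created within the
# same month are attached to that month's cycle.
monthly_cfg = Config.configure_scenario(
    "sales_forecast",   # illustrative scenario id
    task_configs=[],    # add real TaskConfig objects here
    frequency=Frequency.MONTHLY,
)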

+ 3 - 3
taipy/common/config/common/scope.py

@@ -39,9 +39,9 @@ class Scope(_OrderedEnum):
 
     This enumeration can have the following values:
 
-    - `GLOBAL`
-    - `CYCLE`
-    - `SCENARIO` (Default value)
+    - `GLOBAL`: Global scope, the data node is shared by all the scenarios.
+    - `CYCLE`: Cycle scope, the data node is shared by all the scenarios of the same cycle.
+    - `SCENARIO` (Default value): Scenario scope, the data node is unique to a scenario.
 
     Each data node config has a scope. It is an attribute propagated to the `DataNode^`
     when instantiated from a `DataNodeConfig^`. The scope is used to determine the
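
A brief sketch of how these scopes are typically assigned; the data node ids are illustrative:

from taipy.common.config import Config, Scope

# Shared by all scenarios of the same cycle.
history_cfg = Config.configure_data_node("sales_history", scope=Scope.CYCLE)

# Unique to each scenario (the default).
prediction_cfg = Config.configure_data_node("predictions", scope=Scope.SCENARIO)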

+ 25 - 26
taipy/common/config/config.py

@@ -56,9 +56,9 @@ class Config:
         ??? example "Advanced use case"
 
             The configuration can be done in three ways: Python code, configuration files, or
-            environment variables. All configuration manners are ultimately merged (overriding the previous way
+            environment variables. All configuration manners are ultimately merged (overriding the previous
             way) to create a final applied configuration. Please refer to the
-            [advanced configuration](../../userman/advanced_features/configuration/advanced-config.md)
+            [advanced configuration](../../../../../../userman/advanced_features/configuration/advanced-config.md)
             section from the user manual for more details.
 
     2. Attributes and methods to retrieve the configuration values.
@@ -103,12 +103,6 @@ class Config:
                 file and replace the current Python configuration.
             - *Override the configuration*: Use the `Config.override()^` method to load a TOML
                 configuration file and override the current Python configuration.
-
-    Attributes:
-        global_config (GlobalAppConfig): configuration values related to the global
-            application as a `GlobalAppConfig^`.
-        unique_sections (Dict[str, UniqueSection]): A dictionary containing all unique sections.
-        sections (Dict[str, Dict[str, Section]]): A dictionary containing all non-unique sections.
     """
 
     _ENVIRONMENT_VARIABLE_NAME_WITH_CONFIG_PATH = "TAIPY_CONFIG_PATH"
@@ -125,19 +119,22 @@ class Config:
 
     @_Classproperty
     def unique_sections(cls) -> Dict[str, UniqueSection]:
+        """A dictionary containing all unique sections."""
         return cls._applied_config._unique_sections
 
     @_Classproperty
     def sections(cls) -> Dict[str, Dict[str, Section]]:
+        """A dictionary containing all non-unique sections."""
         return cls._applied_config._sections
 
     @_Classproperty
     def global_config(cls) -> GlobalAppConfig:
+        """configuration values related to the global application as a `GlobalAppConfig^`."""
         return cls._applied_config._global_config
 
     @classmethod
     @_ConfigBlocker._check()
-    def load(cls, filename):
+    def load(cls, filename: str) -> None:
         """Load a configuration file.
 
         The current Python configuration is replaced and the Config compilation is triggered.
@@ -151,22 +148,22 @@ class Config:
         cls.__logger.info(f"Configuration '{filename}' successfully loaded.")
 
     @classmethod
-    def export(cls, filename):
+    def export(cls, filename: str) -> None:
         """Export a configuration.
 
-        The export is done in a toml file.
+        The export is done in a toml file. The exported configuration is taken
+        from the Python code configuration.
 
-        The exported configuration is taken from the Python code configuration.
+        Note:
+            If *filename* already exists, it is overwritten.
 
         Parameters:
             filename (Union[str, Path]): The path of the file to export.
-        Note:
-            If *filename* already exists, it is overwritten.
         """
         cls._serializer._write(cls._python_config, filename)
 
     @classmethod
-    def backup(cls, filename):
+    def backup(cls, filename: str) -> None:
         """Backup a configuration.
 
         The backup is done in a toml file.
@@ -175,16 +172,17 @@ class Config:
         the application: the Python code configuration, the file configuration and the environment
         configuration.
 
-        Parameters:
-            filename (Union[str, Path]): The path of the file to export.
         Note:
             If *filename* already exists, it is overwritten.
+
+        Parameters:
+            filename (Union[str, Path]): The path of the file to export.
         """
         cls._serializer._write(cls._applied_config, filename)
 
     @classmethod
     @_ConfigBlocker._check()
-    def restore(cls, filename):
+    def restore(cls, filename: str) -> None:
         """Restore a configuration file and replace the current applied configuration.
 
         Parameters:
@@ -196,7 +194,7 @@ class Config:
 
     @classmethod
     @_ConfigBlocker._check()
-    def override(cls, filename):
+    def override(cls, filename: str) -> None:
         """Load a configuration from a file and overrides the current config.
 
         Parameters:
@@ -209,12 +207,12 @@ class Config:
         cls.__logger.info(f"Configuration '{filename}' successfully loaded.")
 
     @classmethod
-    def block_update(cls):
+    def block_update(cls) -> None:
         """Block update on the configuration signgleton."""
         _ConfigBlocker._block()
 
     @classmethod
-    def unblock_update(cls):
+    def unblock_update(cls) -> None:
         """Unblock update on the configuration signgleton."""
         _ConfigBlocker._unblock()
 
@@ -225,6 +223,7 @@ class Config:
 
         Parameters:
             **properties (Dict[str, Any]): A dictionary of additional properties.
+
         Returns:
             The global application configuration.
         """
@@ -255,7 +254,7 @@ class Config:
 
     @classmethod
     @_ConfigBlocker._check()
-    def _register_default(cls, default_section: Section):
+    def _register_default(cls, default_section: Section) -> None:
         if isinstance(default_section, UniqueSection):
             if cls._default_config._unique_sections.get(default_section.name, None):
                 cls._default_config._unique_sections[default_section.name]._update(default_section._to_dict())
@@ -271,7 +270,7 @@ class Config:
 
     @classmethod
     @_ConfigBlocker._check()
-    def _register(cls, section):
+    def _register(cls, section) -> None:
         if isinstance(section, UniqueSection):
             if cls._python_config._unique_sections.get(section.name, None):
                 cls._python_config._unique_sections[section.name]._update(section._to_dict())
@@ -289,7 +288,7 @@ class Config:
         cls._compile_configs()
 
     @classmethod
-    def _override_env_file(cls):
+    def _override_env_file(cls) -> None:
         if cfg_filename := os.environ.get(cls._ENVIRONMENT_VARIABLE_NAME_WITH_CONFIG_PATH):
             if not os.path.exists(cfg_filename):
                 cls.__logger.error(
@@ -303,7 +302,7 @@ class Config:
             cls.__logger.info(f"Configuration '{cfg_filename}' successfully loaded.")
 
     @classmethod
-    def _compile_configs(cls):
+    def _compile_configs(cls) -> None:
         Config._override_env_file()
         cls._applied_config._clean()
         if cls._default_config:
@@ -316,7 +315,7 @@ class Config:
             cls._applied_config._update(cls._env_file_config)
 
     @classmethod
-    def __log_message(cls, config):
+    def __log_message(cls, config) -> None:
         for issue in config._collector._warnings:
             cls.__logger.warning(str(issue))
         for issue in config._collector._infos:
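
Taken together, the file-handling methods documented above support a workflow like the following sketch; the file names are illustrative:

from taipy.common.config import Config

# Export only the Python code configuration (overwrites the file).
Config.export("config.toml")

# Back up the fully applied configuration: Python code, file, and
# environment settings merged together.
Config.backup("config_backup.toml")

# Replace the current Python configuration from a TOML file...
Config.load("config.toml")
# ...or merge a TOML file on top of the current configuration.
Config.override("config_override.toml")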

+ 43 - 42
taipy/common/config/config.pyi

@@ -54,9 +54,9 @@ class Config:
         ??? example "Advanced use case"
 
             The configuration can be done in three ways: Python code, configuration files, or
-            environment variables. All configuration manners are ultimately merged (overriding the previous way
+            environment variables. All configuration manners are ultimately merged (overriding the previous
             way) to create a final applied configuration. Please refer to the
-            [advanced configuration](../../userman/advanced_features/configuration/advanced-config.md)
+            [advanced configuration](../../../../../../userman/advanced_features/configuration/advanced-config.md)
             section from the user manual for more details.
 
     2. Attributes and methods to retrieve the configuration values.
@@ -101,28 +101,22 @@ class Config:
                 file and replace the current Python configuration.
             - *Override the configuration*: Use the `Config.override()^` method to load a TOML
                 configuration file and override the current Python configuration.
-
-    Attributes:
-        global_config (GlobalAppConfig): configuration values related to the global
-            application as a `GlobalAppConfig^`.
-        unique_sections (Dict[str, UniqueSection]): A dictionary containing all unique sections.
-        sections (Dict[str, Dict[str, Section]]): A dictionary containing all non-unique sections.
     """
     @_Classproperty
     def unique_sections(cls) -> Dict[str, UniqueSection]:
-        """"""
+        """A dictionary containing all unique sections."""
 
     @_Classproperty
     def sections(cls) -> Dict[str, Dict[str, Section]]:
-        """"""
+        """A dictionary containing all non-unique sections."""
 
     @_Classproperty
     def global_config(cls) -> GlobalAppConfig:
-        """"""
+        """configuration values related to the global application as a `GlobalAppConfig^`."""
 
     @classmethod
     @_ConfigBlocker._check()
-    def load(cls, filename):
+    def load(cls, filename: str) -> None:
         """Load a configuration file.
 
         The current Python configuration is replaced and the Config compilation is triggered.
@@ -132,21 +126,21 @@ class Config:
         """
 
     @classmethod
-    def export(cls, filename):
+    def export(cls, filename: str) -> None:
         """Export a configuration.
 
-        The export is done in a toml file.
+        The export is done in a toml file. The exported configuration is taken
+        from the Python code configuration.
 
-        The exported configuration is taken from the Python code configuration.
+        Note:
+            If *filename* already exists, it is overwritten.
 
         Parameters:
             filename (Union[str, Path]): The path of the file to export.
-        Note:
-            If *filename* already exists, it is overwritten.
         """
 
     @classmethod
-    def backup(cls, filename):
+    def backup(cls, filename: str) -> None:
         """Backup a configuration.
 
         The backup is done in a toml file.
@@ -155,15 +149,16 @@ class Config:
         the application: the Python code configuration, the file configuration and the environment
         configuration.
 
-        Parameters:
-            filename (Union[str, Path]): The path of the file to export.
         Note:
             If *filename* already exists, it is overwritten.
+
+        Parameters:
+            filename (Union[str, Path]): The path of the file to export.
         """
 
     @classmethod
     @_ConfigBlocker._check()
-    def restore(cls, filename):
+    def restore(cls, filename: str) -> None:
         """Restore a configuration file and replace the current applied configuration.
 
         Parameters:
@@ -172,7 +167,7 @@ class Config:
 
     @classmethod
     @_ConfigBlocker._check()
-    def override(cls, filename):
+    def override(cls, filename: str) -> None:
         """Load a configuration from a file and overrides the current config.
 
         Parameters:
@@ -180,11 +175,11 @@ class Config:
         """
 
     @classmethod
-    def block_update(cls):
+    def block_update(cls) -> None:
         """Block update on the configuration signgleton."""
 
     @classmethod
-    def unblock_update(cls):
+    def unblock_update(cls) -> None:
         """Unblock update on the configuration signgleton."""
 
     @classmethod
@@ -194,6 +189,7 @@ class Config:
 
         Parameters:
             **properties (Dict[str, Any]): A dictionary of additional properties.
+
         Returns:
             The global application configuration.
         """
@@ -214,20 +210,20 @@ class Config:
 
     @classmethod
     @_ConfigBlocker._check()
-    def _register_default(cls, default_section: Section):
+    def _register_default(cls, default_section: Section) -> None:
         """"""
 
     @classmethod
     @_ConfigBlocker._check()
-    def _register(cls, section):
+    def _register(cls, section) -> None:
         """"""
 
     @classmethod
-    def _override_env_file(cls):
+    def _override_env_file(cls) -> None:
         """"""
 
     @classmethod
-    def _compile_configs(cls):
+    def _compile_configs(cls) -> None:
         """"""
 
     @classmethod
@@ -353,7 +349,7 @@ class Config:
             validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
                 considered up-to-date. Once the validity period has passed, the data node is considered stale and
                 relevant tasks will run even if they are skippable (see the Task configuration
-                [page](../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
+                [page](../../../../../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
                 for more details).
                 If *validity_period* is set to None, the data node is always up-to-date.
             **properties (dict[str, any]): A keyworded variable length list of additional arguments.
@@ -406,7 +402,7 @@ class Config:
             validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
                 considered up-to-date. Once the validity period has passed, the data node is considered stale and
                 relevant tasks will run even if they are skippable (see the Task configuration
-                [page](../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
+                [page](../../../../../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
                 for more details).
                 If *validity_period* is set to None, the data node is always up-to-date.
             **properties (dict[str, any]): A keyworded variable length list of additional arguments.
@@ -441,7 +437,7 @@ class Config:
             validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
                 considered up-to-date. Once the validity period has passed, the data node is considered stale and
                 relevant tasks will run even if they are skippable (see the Task configuration
-                [page](../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
+                [page](../../../../../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
                 for more details).
                 If *validity_period* is set to None, the data node is always up-to-date.
             **properties (dict[str, any]): A keyworded variable length list of additional arguments.
@@ -475,10 +471,11 @@ class Config:
             validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
                 considered up-to-date. Once the validity period has passed, the data node is considered stale and
                 relevant tasks will run even if they are skippable (see the Task configuration
-                [page](../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
+                [page](../../../../../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
                 for more details).
                 If *validity_period* is set to None, the data node is always up-to-date.
             **properties (dict[str, any]): A keyworded variable length list of additional arguments.
+
         Returns:
             The new JSON data node configuration.
         """  # noqa: E501
@@ -520,7 +517,7 @@ class Config:
             validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
                 considered up-to-date. Once the validity period has passed, the data node is considered stale and
                 relevant tasks will run even if they are skippable (see the Task configuration
-                [page](../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
+                [page](../../../../../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
                 for more details).
                 If *validity_period* is set to None, the data node is always up-to-date.
             **properties (dict[str, any]): A keyworded variable length list of additional arguments.
@@ -556,7 +553,7 @@ class Config:
             validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
                 considered up-to-date. Once the validity period has passed, the data node is considered stale and
                 relevant tasks will run even if they are skippable (see the Task configuration
-                [page](../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
+                [page](../../../../../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
                 for more details).
                 If *validity_period* is set to None, the data node is always up-to-date.
             **properties (dict[str, any]): A keyworded variable length list of additional arguments.
@@ -593,10 +590,11 @@ class Config:
             validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
                 considered up-to-date. Once the validity period has passed, the data node is considered stale and
                 relevant tasks will run even if they are skippable (see the Task configuration
-                [page](../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
+                [page](../../../../../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
                 for more details).
                 If *validity_period* is set to None, the data node is always up-to-date.
             **properties (dict[str, any]): A keyworded variable length list of additional arguments.
+
         Returns:
             The new Generic data node configuration.
         """  # noqa: E501
@@ -623,7 +621,7 @@ class Config:
             validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
                 considered up-to-date. Once the validity period has passed, the data node is considered stale and
                 relevant tasks will run even if they are skippable (see the Task configuration
-                [page](../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
+                [page](../../../../../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
                 for more details).
                 If *validity_period* is set to None, the data node is always up-to-date.
             **properties (dict[str, any]): A keyworded variable length list of additional arguments.
@@ -656,7 +654,7 @@ class Config:
             validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
                 considered up-to-date. Once the validity period has passed, the data node is considered stale and
                 relevant tasks will run even if they are skippable (see the Task configuration
-                [page](../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
+                [page](../../../../../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
                 for more details).
                 If *validity_period* is set to None, the data node is always up-to-date.
             **properties (dict[str, any]): A keyworded variable length list of additional arguments.
@@ -715,7 +713,7 @@ class Config:
             validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
                 considered up-to-date. Once the validity period has passed, the data node is considered stale and
                 relevant tasks will run even if they are skippable (see the Task configuration
-                [page](../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
+                [page](../../../../../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
                 for more details).
                 If *validity_period* is set to None, the data node is always up-to-date.
             **properties (dict[str, any]): A keyworded variable length list of additional arguments.
@@ -780,10 +778,11 @@ class Config:
             validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
                 considered up-to-date. Once the validity period has passed, the data node is considered stale and
                 relevant tasks will run even if they are skippable (see the Task configuration
-                [page](../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
+                [page](../../../../../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
                 for more details).
                 If *validity_period* is set to None, the data node is always up-to-date.
             **properties (dict[str, any]): A keyworded variable length list of additional arguments.
+
         Returns:
             The new SQL data node configuration.
         """  # noqa: E501
@@ -830,7 +829,7 @@ class Config:
             validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
                 considered up-to-date. Once the validity period has passed, the data node is considered stale and
                 relevant tasks will run even if they are skippable (see the Task configuration
-                [page](../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
+                [page](../../../../../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
                 for more details).
                 If *validity_period* is set to None, the data node is always up-to-date.
             **properties (dict[str, any]): A keyworded variable length list of additional arguments.
@@ -869,7 +868,7 @@ class Config:
             validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
                 considered up-to-date. Once the validity period has passed, the data node is considered stale and
                 relevant tasks will run even if they are skippable (see the Task configuration
-                [page](../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
+                [page](../../../../../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
                 for more details).
                 If *validity_period* is set to None, the data node is always up-to-date.
             **properties (dict[str, any]): A keyworded variable length list of additional arguments.
@@ -934,6 +933,7 @@ class Config:
                 The default value is False.
             **properties (dict[str, any]): A keyworded variable length list of additional
                 arguments.
+
         Returns:
             The default task configuration.
         """
@@ -992,7 +992,7 @@ class Config:
             mode (Optional[str]): Indicates the mode of the version management system.
                 Possible values are *"development"* or *"experiment"*. On Enterprise edition of Taipy,
                 *production* mode is also available. Please refer to the
-                [Versioning management](../../userman/advanced_features/versioning/index.md)
+                [Versioning management](../../../../../../userman/advanced_features/versioning/index.md)
                 documentation page for more details.
             version_number (Optional[str]): The string identifier of the version.
                  In development mode, the version number is ignored.
@@ -1000,6 +1000,7 @@ class Config:
                 has changed and run the application.
             **properties (Dict[str, Any]): A keyworded variable length list of additional arguments configure the
                 behavior of the `Orchestrator^` service.
+
         Returns:
             The Core configuration.
         """

+ 9 - 7
taipy/common/config/global_app/global_app_config.py

@@ -18,22 +18,19 @@ from ..common._template_handler import _TemplateHandler as _tpl
 
 
 class GlobalAppConfig:
-    """Configuration attributes related to the global application.
-
-    Attributes:
-        **properties (Dict[str, Any]): A dictionary of additional properties.
-    """
+    """Configuration attributes related to the global application."""
 
     def __init__(self, **properties):
         self._properties = properties
 
     @property
-    def properties(self):
+    def properties(self) -> Dict[str, Any]:
+        """A dictionary of additional properties."""
         return {k: _tpl._replace_templates(v) for k, v in self._properties.items()}
 
     @properties.setter  # type: ignore
     @_ConfigBlocker._check()
-    def properties(self, val):
+    def properties(self, val) -> None:
         self._properties = val
 
     def __getattr__(self, item: str) -> Optional[Any]:
@@ -41,6 +38,11 @@ class GlobalAppConfig:
 
     @classmethod
     def default_config(cls) -> GlobalAppConfig:
+        """Return the GlobalAppConfig section used by default.
+
+        Returns:
+            The default configuration.
+        """
         return GlobalAppConfig()
 
     def _clean(self):
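
As a quick illustration of the property dictionary, assuming the standard Config.configure_global_app() method (the property names are illustrative):

from taipy.common.config import Config

# Attach arbitrary application-level properties to the global section.
Config.configure_global_app(environment="staging", debug=True)

# They are exposed back through the properties dictionary, with any
# environment-variable templates resolved.
print(Config.global_config.properties["environment"])  # "staging"
print(Config.global_config.environment)  # also reachable via __getattr__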

+ 11 - 1
taipy/common/config/section.py

@@ -37,7 +37,7 @@ class Section:
     can exist. They are subclasses of the `UniqueSection^` abstract class such as:
 
     - `GlobalAppConfig^` for configuring global application settings.
-    - `GuiConfig` for configuring the GUI service.
+    - `_GuiConfig` for configuring the GUI service.
     - `CoreSection^` for configuring the core package behavior.
     - `JobConfig^` for configuring the job orchestration.
     - `AuthenticationConfig^` for configuring authentication settings.
@@ -47,6 +47,9 @@ class Section:
     _DEFAULT_KEY = "default"
     _ID_KEY = "id"
 
+    id: str
+    """A valid python identifier that uniquely identifies the section."""
+
     def __init__(self, id, **properties):
         self.id = _validate_id(id)
         self._properties = properties or {}
@@ -58,6 +61,12 @@ class Section:
     @property
     @abstractmethod
     def name(self):
+        """The name of the section.
+
+        This property is used to identify the section in the configuration. It is used as a key in the
+        dictionary of sections in the `Config^` class.
+        Note also that the name of the section is exposed as a `Config^` property.
+        """
         raise NotImplementedError
 
     @abstractmethod
@@ -82,6 +91,7 @@ class Section:
 
     @property
     def properties(self):
+        """A dictionary of additional properties."""
         return {k: _tpl._replace_templates(v) for k, v in self._properties.items()}
 
     @properties.setter  # type: ignore
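
To make the name-as-key contract concrete, a small lookup sketch; the section names follow the classes shown in this changeset ("CORE" for the unique core section, "DATA_NODE" is assumed for data node configs), and the config id is illustrative:

from taipy.common.config import Config

# Unique sections are keyed directly by their name...
core_section = Config.unique_sections["CORE"]

# ...while non-unique sections map a name to a dictionary of
# configurations keyed by their id.
data_node_cfgs = Config.sections["DATA_NODE"]
sales_cfg = data_node_cfgs["sales"]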

+ 8 - 9
taipy/core/__init__.py

@@ -9,11 +9,10 @@
 # an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
 # specific language governing permissions and limitations under the License.
 
-"""# Taipy `core` Package
-
-The Taipy `core` package is a Python library designed to build powerful, customized, data-driven
-back-end applications. It provides the tools to help Python developers transform their algorithms
-into a complete back-end application. In particular, it helps with:
+"""
+The Taipy `core` package provides powerful, customized, data-driven back-end functionalities.
+It provides the tools to help data scientists and Python developers transform their algorithms
+into a complete back-end application. In particular, it helps with:
 
 - Data Integration
 - Task Orchestration
@@ -25,9 +24,9 @@ More details on the Core functionalities are available in the user manual.
 To use such functionalities, the first step consists of setting up the Taipy configuration to design
 your application's characteristics and behaviors. Use the `Config^` singleton class (from `taipy.common.config`)
 to configure your application. Please refer to the
-[data nodes](../../../userman/scenario_features/data-integration/data-node-config.md),
-[tasks](../../../userman/scenario_features/task-orchestration/scenario-config.md),
-and [scenarios](../../../userman/scenario_features/sdm/scenario/scenario-config.md) pages.
+[data nodes](../../../../userman/scenario_features/data-integration/data-node-config.md),
+[tasks](../../../../userman/scenario_features/task-orchestration/scenario-config.md),
+and [scenarios](../../../../userman/scenario_features/sdm/scenario/scenario-config.md) pages.
 
 Once your application is configured, import module `import taipy as tp` so you can use any function described
 in the following section on [Function](#functions). In particular, the most used functions are `tp.create_scenario()`,
@@ -40,7 +39,7 @@ in the following section on [Function](#functions). In particular, the most used
     executor for their execution.
 
     In particular, this `Orchestrator^` service is automatically run when used with Taipy REST or Taipy GUI.
-    See the [running services](../../../userman/run-deploy/run/running_services.md) page of the user
+    See the [running services](../../../../userman/run-deploy/run/running_services.md) page of the user
     manual for more details.
 """
 

+ 3 - 3
taipy/core/_entity/_ready_to_run_property.py

@@ -41,13 +41,13 @@ class _ReadyToRunProperty:
 
         for scenario_parent in parent_entities.get(Scenario._MANAGER_NAME, []):
             if dn in scenario_parent.get_inputs():
-                cls.__add(scenario_parent, dn, reason)
+                cls.__add(scenario_parent, dn, reason)  # type: ignore
         for sequence_parent in parent_entities.get(Sequence._MANAGER_NAME, []):
             if dn in sequence_parent.get_inputs():
-                cls.__add(sequence_parent, dn, reason)
+                cls.__add(sequence_parent, dn, reason)  # type: ignore
         for task_parent in parent_entities.get(Task._MANAGER_NAME, []):
             if dn in task_parent.input.values():
-                cls.__add(task_parent, dn, reason)
+                cls.__add(task_parent, dn, reason)  # type: ignore
 
     @classmethod
     def _remove(cls, datanode: "DataNode", reason: Reason) -> None:

+ 26 - 26
taipy/core/_entity/submittable.py

@@ -29,27 +29,14 @@ class Submittable:
     """Instance of an entity that can be submitted for execution.
 
     A submittable holds functions that can be used to build the execution directed acyclic graph.
-
-    Attributes:
-        subscribers (List[Callable]): The list of callbacks to be called on `Job^`'s status change.
     """
 
     def __init__(self, submittable_id: str, subscribers: Optional[List[_Subscriber]] = None) -> None:
         self._submittable_id = submittable_id
         self._subscribers = _ListAttributes(self, subscribers or [])
 
-    @abc.abstractmethod
-    def submit(
-        self,
-        callbacks: Optional[List[Callable]] = None,
-        force: bool = False,
-        wait: bool = False,
-        timeout: Optional[Union[float, int]] = None,
-    ) -> Submission:
-        raise NotImplementedError
-
     def get_inputs(self) -> Set[DataNode]:
-        """Return the set of input data nodes of the submittable entity.
+        """Return the set of input data nodes of this submittable.
 
         Returns:
             The set of input data nodes.
@@ -57,9 +44,6 @@ class Submittable:
         dag = self._build_dag()
         return self.__get_inputs(dag)
 
-    def __get_inputs(self, dag: nx.DiGraph) -> Set[DataNode]:
-        return {node for node, degree in dict(dag.in_degree).items() if degree == 0 and isinstance(node, DataNode)}
-
     def get_outputs(self) -> Set[DataNode]:
         """Return the set of output data nodes of the submittable entity.
 
@@ -69,9 +53,6 @@ class Submittable:
         dag = self._build_dag()
         return self.__get_outputs(dag)
 
-    def __get_outputs(self, dag: nx.DiGraph) -> set[DataNode]:
-        return {node for node, degree in dict(dag.out_degree).items() if degree == 0 and isinstance(node, DataNode)}
-
     def get_intermediate(self) -> Set[DataNode]:
         """Return the set of intermediate data nodes of the submittable entity.
 
@@ -87,7 +68,8 @@ class Submittable:
 
         Returns:
             A ReasonCollection object that can function as a Boolean value,
-            which is True if the given entity is ready to be run or there is no reason to be blocked, False otherwise.
+                which is True if the given entity is ready to be run or there is
+                no reason to be blocked, False otherwise.
         """
         reason_collection = ReasonCollection()
 
@@ -100,7 +82,7 @@ class Submittable:
         return reason_collection
 
     def data_nodes_being_edited(self) -> Set[DataNode]:
-        """Return the set of data nodes of the submittable entity that are being edited.
+        """Return the set of data nodes that are being edited.
 
         Returns:
             The set of data nodes that are being edited.
@@ -109,17 +91,35 @@ class Submittable:
         return {node for node in dag.nodes if isinstance(node, DataNode) and node.edit_in_progress}
 
     @abc.abstractmethod
-    def subscribe(self, callback: Callable[[Submittable, Job], None], params: Optional[List[Any]] = None):
+    def submit(
+        self,
+        callbacks: Optional[List[Callable]] = None,
+        force: bool = False,
+        wait: bool = False,
+        timeout: Optional[Union[float, int]] = None,
+    ) -> Submission:
         raise NotImplementedError
 
     @abc.abstractmethod
-    def unsubscribe(self, callback: Callable[[Submittable, Job], None], params: Optional[List[Any]] = None):
+    def subscribe(self, callback: Callable[[Submittable, Job], None], params: Optional[List[Any]] = None) -> None:
+        raise NotImplementedError
+
+    @abc.abstractmethod
+    def unsubscribe(self, callback: Callable[[Submittable, Job], None], params: Optional[List[Any]] = None) -> None:
         raise NotImplementedError
 
     @abc.abstractmethod
     def _get_set_of_tasks(self) -> Set[Task]:
         raise NotImplementedError
 
+    @staticmethod
+    def __get_inputs(dag: nx.DiGraph) -> Set[DataNode]:
+        return {node for node, degree in dict(dag.in_degree).items() if degree == 0 and isinstance(node, DataNode)}
+
+    @staticmethod
+    def __get_outputs(dag: nx.DiGraph) -> Set[DataNode]:
+        return {node for node, degree in dict(dag.out_degree).items() if degree == 0 and isinstance(node, DataNode)}
+
     def _get_dag(self) -> _DAG:
         return _DAG(self._build_dag())
 
@@ -143,11 +143,11 @@ class Submittable:
         dag.remove_nodes_from(remove)
         return [nodes for nodes in nx.topological_generations(dag) if (Task in (type(node) for node in nodes))]
 
-    def _add_subscriber(self, callback: Callable, params: Optional[List[Any]] = None):
+    def _add_subscriber(self, callback: Callable, params: Optional[List[Any]] = None) -> None:
         params = [] if params is None else params
         self._subscribers.append(_Subscriber(callback=callback, params=params))
 
-    def _remove_subscriber(self, callback: Callable, params: Optional[List[Any]] = None):
+    def _remove_subscriber(self, callback: Callable, params: Optional[List[Any]] = None) -> None:
         if params is not None:
             self._subscribers.remove(_Subscriber(callback, params))
         elif elem := [x for x in self._subscribers if x.callback == callback]:
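
A hedged sketch of this Submittable surface as seen from a Scenario (which subclasses it); scenario_cfg is assumed to be an existing scenario configuration:

import taipy as tp

scenario = tp.create_scenario(scenario_cfg)

# Inspect the boundaries of the execution DAG.
inputs = scenario.get_inputs()
outputs = scenario.get_outputs()

# The ReasonCollection behaves like a boolean: truthy when ready to run.
if reasons := scenario.is_ready_to_run():
    submission = scenario.submit(wait=True, timeout=60)
else:
    print(f"Blocked: {reasons}")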

+ 1 - 0
taipy/core/_orchestrator/_dispatcher/_job_dispatcher.py

@@ -119,6 +119,7 @@ class _JobDispatcher(threading.Thread):
 
         Parameters:
              task (Task^): The task to run.
+
         Returns:
              True if the task needs to run. False otherwise.
         """

+ 2 - 0
taipy/core/_orchestrator/_orchestrator.py

@@ -69,6 +69,7 @@ class _Orchestrator(_AbstractOrchestrator):
                 If not provided and *wait* is True, the function waits indefinitely.
              **properties (dict[str, any]): A key worded variable length list of user additional arguments
                 that will be stored within the `Submission^`. It can be accessed via `Submission.properties^`.
+
         Returns:
             The created `Submission^` containing the information about the submission.
         """
@@ -124,6 +125,7 @@ class _Orchestrator(_AbstractOrchestrator):
                 If not provided and *wait* is True, the function waits indefinitely.
              **properties (dict[str, any]): A key worded variable length list of user additional arguments
                 that will be stored within the `Submission^`. It can be accessed via `Submission.properties^`.
+
         Returns:
             The created `Submission^` containing the information about the submission.
         """

+ 1 - 0
taipy/core/common/_mongo_connector.py

@@ -29,6 +29,7 @@ def _connect_mongodb(
         db_password (str): the database password.
         db_extra_args (frozenset): A frozenset converted from a dictionary of additional arguments to be passed into
             database connection string.
+
     Returns:
         pymongo.MongoClient
     """

+ 3 - 3
taipy/core/common/mongo_default_document.py

@@ -16,13 +16,13 @@ class MongoDefaultDocument:
     """The default class for \"custom_document\" property to configure a `MongoCollectionDataNode^`.
 
     Attributes:
-        **kwargs: Attributes of the MongoDefaultDocument object.
+        **kwargs (dict[str, Any]): Attributes of the MongoDefaultDocument object.
 
     Example:
-        - `document = MongoDefaultDocument(name="example", age=30})`
+        - `document = MongoDefaultDocument(name="example", age=30)`
         will return a MongoDefaultDocument object so that `document.name` returns `"example"`,
         and `document.age` returns `30`.
-        - `document = MongoDefaultDocument(date="12/24/2018", temperature=20})`
+        - `document = MongoDefaultDocument(date="12/24/2018", temperature=20)`
         will return a MongoDefaultDocument object so that `document.date` returns `"12/24/2018"`,
         and `document.temperature` returns `20`.
     """

+ 89 - 54
taipy/core/config/core_section.py

@@ -31,33 +31,9 @@ from ..exceptions.exceptions import ConfigCoreVersionMismatched
 
 
 class CoreSection(UniqueSection):
-    """Configuration parameters for running the `Orchestrator^` service.
-
-    Attributes:
-        root_folder (str): Path of the base folder for the taipy application. The default value is "./taipy/"
-        storage_folder (str): Folder name used to store user data. The default value is "user_data/". The default
-            path is "user_data/".
-        taipy_storage_folder (str): Folder name used to store Taipy data. The default value is ".taipy/". The default
-            path is "./taipy/".
-        repository_type (str): Type of the repository to be used to store Taipy data. The default value is
-            "filesystem".
-        repository_properties (Dict[str, Union[str, int]]): A dictionary of additional properties to be used by the
-            repository.
-        read_entity_retry (int): Number of retries to read an entity from the repository before return failure.
-            The default value is 3.
-        mode (str): The Taipy operating mode. By default, the `Orchestrator^` service runs in "development" mode.
-            Please refer to the [Versioning management](../../userman/advanced_features/versioning/index.md)
-            documentation page for more details.
-        version_number (str)): The identifier of the user application version. Please refer to the
-            [Versioning management](../../userman/advanced_features/versioning/index.md)
-            documentation page for more details.
-        force (bool): If True, force the application run even if there are some conflicts in the
-            configuration.
-        core_version (str): The Taipy Core package version.
-        **properties (dict[str, any]): A dictionary of additional properties.
-    """
-
-    name = "CORE"
+    """Configuration parameters for running the `Orchestrator^` service."""
+
+    name: str = "CORE"
 
     _ROOT_FOLDER_KEY = "root_folder"
     _DEFAULT_ROOT_FOLDER = "./taipy/"
@@ -117,7 +93,7 @@ class CoreSection(UniqueSection):
 
         super().__init__(**properties)
 
-    def __copy__(self):
+    def __copy__(self) -> "CoreSection":
         return CoreSection(
             self.root_folder,
             self.storage_folder,
@@ -133,44 +109,64 @@ class CoreSection(UniqueSection):
         )
 
     @property
-    def storage_folder(self):
+    def root_folder(self) -> str:
+        """ Path of the base folder for the taipy application.
+
+        The default value is "./taipy/".
+        """
+        return _tpl._replace_templates(self._root_folder)
+
+    @root_folder.setter  # type: ignore
+    @_ConfigBlocker._check()
+    def root_folder(self, val) -> None:
+        self._root_folder = val
+
+    @property
+    def storage_folder(self) -> str:
+        """Folder name used to store user data.
+
+        The default value is "user_data/".
+
+        It is used in conjunction with the *root_folder* attribute. That means the storage path is
+        <root_folder><storage_folder> (the default path is "./taipy/user_data/").
+        """
         return _tpl._replace_templates(self._storage_folder)
 
     @storage_folder.setter  # type: ignore
     @_ConfigBlocker._check()
-    def storage_folder(self, val):
+    def storage_folder(self, val) -> None:
         self._storage_folder = val
 
     @property
-    def taipy_storage_folder(self):
+    def taipy_storage_folder(self) -> str:
+        """Folder name used to store internal Taipy data.
+
+        The default value is ".taipy/".
+        """
         return _tpl._replace_templates(self._taipy_storage_folder)
 
     @taipy_storage_folder.setter  # type: ignore
     @_ConfigBlocker._check()
-    def taipy_storage_folder(self, val):
+    def taipy_storage_folder(self, val) -> None:
         self._taipy_storage_folder = val
 
     @property
-    def root_folder(self):
-        return _tpl._replace_templates(self._root_folder)
+    def repository_type(self) -> str:
+        """Type of the repository to be used to store Taipy data.
 
-    @root_folder.setter  # type: ignore
-    @_ConfigBlocker._check()
-    def root_folder(self, val):
-        self._root_folder = val
-
-    @property
-    def repository_type(self):
+        The default value is "filesystem".
+        """
         return _tpl._replace_templates(self._repository_type)
 
     @repository_type.setter  # type: ignore
     @_ConfigBlocker._check()
-    def repository_type(self, val):
+    def repository_type(self, val) -> None:
         self._repository_type = val
         CoreSection.__reload_repositories()
 
     @property
-    def repository_properties(self):
+    def repository_properties(self) -> Dict[str, Union[str, int]]:
+        """A dictionary of additional properties to be used by the repository."""
         return (
             {k: _tpl._replace_templates(v) for k, v in self._repository_properties.items()}
             if self._repository_properties
@@ -179,47 +175,85 @@ class CoreSection(UniqueSection):
 
     @repository_properties.setter  # type: ignore
     @_ConfigBlocker._check()
-    def repository_properties(self, val):
+    def repository_properties(self, val) -> None:
         self._repository_properties = val
 
     @property
-    def read_entity_retry(self):
+    def read_entity_retry(self) -> int:
+        """Number of retries to read an entity from the repository before return failure.
+
+        The default value is 3.
+        """
         return _tpl._replace_templates(self._read_entity_retry)
 
     @read_entity_retry.setter  # type: ignore
     @_ConfigBlocker._check()
-    def read_entity_retry(self, val):
+    def read_entity_retry(self, val) -> None:
         self._read_entity_retry = val
 
     @property
-    def mode(self):
+    def mode(self) -> str:
+        """The operating mode of Taipy.
+
+        Taipy applications are versioned. The versioning system is used to manage
+        the different versions of the user application. Depending on the
+        operating mode, Taipy will behave differently when a version of the
+        application runs. Three modes are available: "development", "experiment",
+        and "production". Please refer to the
+        [Versioning management](../../../../../../userman/advanced_features/versioning/index.md)
+        documentation page for more details.
+
+        By default, Taipy runs in "development" mode.
+        """
         return _tpl._replace_templates(self._mode)
 
     @mode.setter  # type: ignore
     @_ConfigBlocker._check()
-    def mode(self, val):
+    def mode(self, val) -> None:
         self._mode = val
 
     @property
-    def version_number(self):
+    def version_number(self) -> str:
+        """The identifier of the user application version.
+
+        Please refer to the
+        [Versioning management](../../../../../../userman/advanced_features/versioning/index.md)
+        documentation page for more details.
+        """
         return _tpl._replace_templates(self._version_number)
 
     @version_number.setter  # type: ignore
     @_ConfigBlocker._check()
-    def version_number(self, val):
+    def version_number(self, val) -> None:
         self._version_number = val
 
     @property
-    def force(self):
+    def force(self) -> bool:
+        """If True, force the run of a user application.
+
+        If the configuration of the application's current version conflicts with
+        the configuration of the last run, Taipy will exit. If the *force*
+        attribute is set to True, Taipy will run even if there are some conflicts.
+        """
         return _tpl._replace_templates(self._force)
 
     @force.setter  # type: ignore
     @_ConfigBlocker._check()
-    def force(self, val):
+    def force(self, val) -> None:
         self._force = val
 
+    @property
+    def core_version(self) -> str:
+        """The version of the Taipy core library."""
+        return _tpl._replace_templates(self._core_version)
+
     @classmethod
-    def default_config(cls):
+    def default_config(cls) -> "CoreSection":
+        """Return a core section with all the default values.
+
+        Returns:
+            The default core section.
+        """
         return CoreSection(
             cls._DEFAULT_ROOT_FOLDER,
             cls._DEFAULT_STORAGE_FOLDER,
@@ -374,7 +408,7 @@ class CoreSection(UniqueSection):
             mode (Optional[str]): Indicates the mode of the version management system.
                 Possible values are *"development"* or *"experiment"*. On Enterprise edition of Taipy,
                 *production* mode is also available. Please refer to the
-                [Versioning management](../../userman/advanced_features/versioning/index.md)
+                [Versioning management](../../../../../../userman/advanced_features/versioning/index.md)
                 documentation page for more details.
             version_number (Optional[str]): The string identifier of the version.
                  In development mode, the version number is ignored.
@@ -382,6 +416,7 @@ class CoreSection(UniqueSection):
                 has changed and run the application.
             **properties (Dict[str, Any]): A keyworded variable length list of additional arguments to configure the
                 behavior of the `Orchestrator^` service.
+
         Returns:
             The Core configuration.
         """

+ 48 - 40
taipy/core/config/data_node_config.py

@@ -33,14 +33,6 @@ class DataNodeConfig(Section):
     needed to create an actual data node.
 
     Attributes:
-        id (str): Unique identifier of the data node config. It must be a valid Python variable name.
-        storage_type (str): Storage type of the data nodes created from the data node config. The possible values
-            are : "csv", "excel", "pickle", "sql_table", "sql", "mongo_collection", "generic", "json", "parquet",
-            "in_memory and "s3_object".
-            The default value is "pickle".
-            Note that the "in_memory" value can only be used when `JobConfig^` mode is "development".
-        scope (Optional[Scope^]): The optional `Scope^` of the data nodes instantiated from the data node config.
-            The default value is SCENARIO.
         **properties (dict[str, any]): A dictionary of additional properties.
     """
 
@@ -297,46 +289,59 @@ class DataNodeConfig(Section):
         return _tpl._replace_templates(self._properties.get(item))
 
     @property
-    def storage_type(self):
+    def storage_type(self) -> str:
+        """Storage type of the data nodes created from the data node config.
+
+        The possible values are: "csv", "excel", "pickle", "sql_table", "sql",
+        "mongo_collection", "generic", "json", "parquet", "in_memory", and "s3_object".
+
+        The default value is "pickle".
+
+        Note that the "in_memory" value can only be used when `JobConfig^` mode is "development".
+        """
         return _tpl._replace_templates(self._storage_type)
 
     @storage_type.setter  # type: ignore
     @_ConfigBlocker._check()
-    def storage_type(self, val):
+    def storage_type(self, val) -> None:
         self._storage_type = val
 
     @property
-    def scope(self):
+    def scope(self) -> Scope:
+        """The `Scope^` of the data nodes instantiated from the data node config."""
         return _tpl._replace_templates(self._scope)
 
     @scope.setter  # type: ignore
     @_ConfigBlocker._check()
-    def scope(self, val):
+    def scope(self, val) -> None:
         self._scope = val
 
     @property
-    def validity_period(self):
+    def validity_period(self) -> Optional[timedelta]:
+        """ The validity period of the data nodes instantiated from the data node config.
+
+        It corresponds to the duration since the last edit date for which the data node
+        can be considered valid. Once the validity period has passed, the data node is
+        considered stale and relevant tasks that are submitted will run even if they are
+        skippable.
+
+        If the validity period is set to None (the default value), the data node is always
+        up-to-date.
+        """
         return _tpl._replace_templates(self._validity_period)
 
     @validity_period.setter  # type: ignore
     @_ConfigBlocker._check()
-    def validity_period(self, val):
+    def validity_period(self, val) -> None:
         self._validity_period = val
 
-    @property
-    def cacheable(self):
-        _warn_deprecated("cacheable", suggest="the skippable feature")
-        cacheable = self._properties.get("cacheable")
-        return _tpl._replace_templates(cacheable) if cacheable is not None else False
-
-    @cacheable.setter  # type: ignore
-    @_ConfigBlocker._check()
-    def cacheable(self, val):
-        _warn_deprecated("cacheable", suggest="the skippable feature")
-        self._properties["cacheable"] = val
-
     @classmethod
-    def default_config(cls):
+    def default_config(cls) -> "DataNodeConfig":
+        """Get a data node configuration with all the default values.
+
+        Returns:
+            The default data node configuration.
+        """
         return DataNodeConfig(
             cls._DEFAULT_KEY, cls._DEFAULT_STORAGE_TYPE, cls._DEFAULT_SCOPE, cls._DEFAULT_VALIDITY_PERIOD
         )
@@ -412,7 +417,7 @@ class DataNodeConfig(Section):
             validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
                 considered up-to-date. Once the validity period has passed, the data node is considered stale and
                 relevant tasks will run even if they are skippable (see the Task configuration
-                [page](../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
+                [page](../../../../../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
                 for more details).
                 If *validity_period* is set to None, the data node is always up-to-date.
             **properties (dict[str, any]): A keyworded variable length list of additional arguments.
@@ -473,7 +478,7 @@ class DataNodeConfig(Section):
             validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
                 considered up-to-date. Once the validity period has passed, the data node is considered stale and
                 relevant tasks will run even if they are skippable (see the Task configuration
-                [page](../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
+                [page](../../../../../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
                 for more details).
                 If *validity_period* is set to None, the data node is always up-to-date.
             **properties (dict[str, any]): A keyworded variable length list of additional arguments.
@@ -526,7 +531,7 @@ class DataNodeConfig(Section):
             validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
                 considered up-to-date. Once the validity period has passed, the data node is considered stale and
                 relevant tasks will run even if they are skippable (see the Task configuration
-                [page](../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
+                [page](../../../../../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
                 for more details).
                 If *validity_period* is set to None, the data node is always up-to-date.
             **properties (dict[str, any]): A keyworded variable length list of additional arguments.
@@ -570,10 +575,11 @@ class DataNodeConfig(Section):
             validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
                 considered up-to-date. Once the validity period has passed, the data node is considered stale and
                 relevant tasks will run even if they are skippable (see the Task configuration
-                [page](../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
+                [page](../../../../../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
                 for more details).
                 If *validity_period* is set to None, the data node is always up-to-date.
             **properties (dict[str, any]): A keyworded variable length list of additional arguments.
+
         Returns:
             The new JSON data node configuration.
         """  # noqa: E501
@@ -625,7 +631,7 @@ class DataNodeConfig(Section):
             validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
                 considered up-to-date. Once the validity period has passed, the data node is considered stale and
                 relevant tasks will run even if they are skippable (see the Task configuration
-                [page](../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
+                [page](../../../../../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
                 for more details).
                 If *validity_period* is set to None, the data node is always up-to-date.
             **properties (dict[str, any]): A keyworded variable length list of additional arguments.
@@ -675,7 +681,7 @@ class DataNodeConfig(Section):
             validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
                 considered up-to-date. Once the validity period has passed, the data node is considered stale and
                 relevant tasks will run even if they are skippable (see the Task configuration
-                [page](../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
+                [page](../../../../../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
                 for more details).
                 If *validity_period* is set to None, the data node is always up-to-date.
             **properties (dict[str, any]): A keyworded variable length list of additional arguments.
@@ -722,10 +728,11 @@ class DataNodeConfig(Section):
             validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
                 considered up-to-date. Once the validity period has passed, the data node is considered stale and
                 relevant tasks will run even if they are skippable (see the Task configuration
-                [page](../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
+                [page](../../../../../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
                 for more details).
                 If *validity_period* is set to None, the data node is always up-to-date.
             **properties (dict[str, any]): A keyworded variable length list of additional arguments.
+
         Returns:
             The new Generic data node configuration.
         """  # noqa: E501
@@ -762,7 +769,7 @@ class DataNodeConfig(Section):
             validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
                 considered up-to-date. Once the validity period has passed, the data node is considered stale and
                 relevant tasks will run even if they are skippable (see the Task configuration
-                [page](../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
+                [page](../../../../../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
                 for more details).
                 If *validity_period* is set to None, the data node is always up-to-date.
             **properties (dict[str, any]): A keyworded variable length list of additional arguments.
@@ -799,7 +806,7 @@ class DataNodeConfig(Section):
             validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
                 considered up-to-date. Once the validity period has passed, the data node is considered stale and
                 relevant tasks will run even if they are skippable (see the Task configuration
-                [page](../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
+                [page](../../../../../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
                 for more details).
                 If *validity_period* is set to None, the data node is always up-to-date.
             **properties (dict[str, any]): A keyworded variable length list of additional arguments.
@@ -864,7 +871,7 @@ class DataNodeConfig(Section):
             validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
                 considered up-to-date. Once the validity period has passed, the data node is considered stale and
                 relevant tasks will run even if they are skippable (see the Task configuration
-                [page](../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
+                [page](../../../../../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
                 for more details).
                 If *validity_period* is set to None, the data node is always up-to-date.
             **properties (dict[str, any]): A keyworded variable length list of additional arguments.
@@ -957,10 +964,11 @@ class DataNodeConfig(Section):
             validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
                 considered up-to-date. Once the validity period has passed, the data node is considered stale and
                 relevant tasks will run even if they are skippable (see the Task configuration
-                [page](../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
+                [page](../../../../../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
                 for more details).
                 If *validity_period* is set to None, the data node is always up-to-date.
             **properties (dict[str, any]): A keyworded variable length list of additional arguments.
+
         Returns:
             The new SQL data node configuration.
         """  # noqa: E501
@@ -1038,7 +1046,7 @@ class DataNodeConfig(Section):
             validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
                 considered up-to-date. Once the validity period has passed, the data node is considered stale and
                 relevant tasks will run even if they are skippable (see the Task configuration
-                [page](../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
+                [page](../../../../../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
                 for more details).
                 If *validity_period* is set to None, the data node is always up-to-date.
             **properties (dict[str, any]): A keyworded variable length list of additional arguments.
@@ -1102,7 +1110,7 @@ class DataNodeConfig(Section):
             validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
                 considered up-to-date. Once the validity period has passed, the data node is considered stale and
                 relevant tasks will run even if they are skippable (see the Task configuration
-                [page](../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
+                [page](../../../../../../userman/scenario_features/task-orchestration/scenario-config.md#from-task-configurations)
                 for more details).
                 If *validity_period* is set to None, the data node is always up-to-date.
             **properties (dict[str, any]): A keyworded variable length list of additional arguments.
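A sketch of how the documented *storage_type*, *scope*, and *validity_period* fields come together in `Config.configure_data_node()`; the id and values are illustrative:

```python
from datetime import timedelta

from taipy import Config, Scope

sales_cfg = Config.configure_data_node(
    id="sales_history",
    storage_type="csv",                 # "pickle" is the default storage type
    scope=Scope.GLOBAL,                 # shared across scenarios and cycles
    validity_period=timedelta(days=1),  # considered stale one day after the last edit
)
print(sales_cfg.storage_type, sales_cfg.scope, sales_cfg.validity_period)
```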

+ 26 - 19
taipy/core/config/job_config.py

@@ -19,14 +19,7 @@ from taipy.common.config.unique_section import UniqueSection
 
 
 class JobConfig(UniqueSection):
-    """
-    Configuration fields related to the jobs' executions.
-
-    Parameters:
-        mode (str): The Taipy operating mode. By default, the "development" mode is set for testing and debugging the
-            executions of jobs. A "standalone" mode is also available.
-        **properties (dict[str, any]): A dictionary of additional properties.
-    """
+    """ Configuration fields related to the task orchestration and the jobs' executions."""
 
     name = "JOB"
 
@@ -37,6 +30,15 @@ class JobConfig(UniqueSection):
     _DEFAULT_MAX_NB_OF_WORKERS = 2
     _MODES = [_DEVELOPMENT_MODE, _STANDALONE_MODE]
 
+    mode: Optional[str]
+    """The task orchestration mode.
+
+    By default, the "development" mode is set for testing and debugging the
+    executions of jobs. A "standalone" mode is also available.
+
+    In the Taipy Enterprise Edition, the "cluster" mode is available.
+    """
+
     def __init__(self, mode: Optional[str] = None, **properties):
         self.mode = mode
         super().__init__(**properties)
@@ -49,8 +51,23 @@ class JobConfig(UniqueSection):
     def __getattr__(self, key: str) -> Optional[Any]:
         return _tpl._replace_templates(self._properties.get(key))  # type: ignore[union-attr]
 
+    @property
+    def is_standalone(self) -> bool:
+        """True if the config is set to standalone mode"""
+        return self.mode == self._STANDALONE_MODE
+
+    @property
+    def is_development(self) -> bool:
+        """True if the config is set to development mode"""
+        return self.mode == self._DEVELOPMENT_MODE
+
     @classmethod
-    def default_config(cls):
+    def default_config(cls) -> "JobConfig":
+        """Return a default configuration for the job execution.
+
+        Returns:
+            The default job execution configuration.
+        """
         return JobConfig(cls._DEFAULT_MODE)
 
     def _clean(self):
@@ -104,16 +121,6 @@ class JobConfig(UniqueSection):
         Config._register(section)
         return Config.unique_sections[JobConfig.name]
 
-    @property
-    def is_standalone(self) -> bool:
-        """True if the config is set to standalone mode"""
-        return self.mode == self._STANDALONE_MODE
-
-    @property
-    def is_development(self) -> bool:
-        """True if the config is set to development mode"""
-        return self.mode == self._DEVELOPMENT_MODE
-
     def _update_default_max_nb_of_workers_properties(self):
         """If the job execution mode is standalone, set the default value for the max_nb_of_workers property"""
         if self.is_standalone and "max_nb_of_workers" not in self._properties:
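A minimal sketch of the documented modes, assuming the usual `Config.configure_job_executions()` entry point; the worker count is illustrative:

```python
from taipy import Config

job_cfg = Config.configure_job_executions(mode="standalone", max_nb_of_workers=4)
print(job_cfg.is_standalone)   # True
print(job_cfg.is_development)  # False
```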

+ 73 - 40
taipy/core/config/scenario_config.py

@@ -25,22 +25,7 @@ from .task_config import TaskConfig
 
 
 class ScenarioConfig(Section):
-    """
-    Configuration fields needed to instantiate an actual `Scenario^`.
-
-    Attributes:
-        id (str): Identifier of the scenario config. It must be a valid Python variable name.
-        tasks (Optional[Union[TaskConfig, List[TaskConfig]]]): List of task configs.<br/>
-            The default value is None.
-        additional_data_nodes (Optional[Union[DataNodeConfig, List[DataNodeConfig]]]): <br/>
-            List of additional data node configs. The default value is None.
-        frequency (Optional[Frequency]): The frequency of the scenario's cycle. The default value is None.
-        comparators: Optional[Dict[str, Union[List[Callable], Callable]]]: Dictionary of the data node <br/>
-            config id as key and a list of Callable used to compare the data nodes as value.
-        sequences (Optional[Dict[str, List[TaskConfig]]]): Dictionary of sequence descriptions.
-            The default value is None.
-        **properties (dict[str, any]): A dictionary of additional properties.
-    """
+    """Configuration fields needed to instantiate an actual `Scenario^`."""
 
     name = "SCENARIO"
 
@@ -51,6 +36,21 @@ class ScenarioConfig(Section):
     _SEQUENCES_KEY = "sequences"
     _COMPARATOR_KEY = "comparators"
 
+    frequency: Optional[Frequency]
+    """The frequency of the scenario's cycle. The default value is None."""
+    comparators: Dict[str, List[Callable]]
+    """The comparator functions used to compare scenarios.
+
+    The default value is None.
+
+    Each comparator function is attached to a scenario's data node configuration.
+    The key of the dictionary parameter corresponds to the data node configuration id.
+    The value is a list of functions that are applied to all the data nodes instantiated
+    from the data node configuration attached to the comparator.
+    """
+    sequences: Dict[str, List[TaskConfig]]
+    """Dictionary of sequence descriptions. The default value is None."""
+
     def __init__(
         self,
         id: str,
@@ -104,38 +104,80 @@ class ScenarioConfig(Section):
 
     @property
     def task_configs(self) -> List[TaskConfig]:
+        """List of task configurations used by this scenario configuration."""
         return self._tasks
 
     @property
     def tasks(self) -> List[TaskConfig]:
+        """List of task configurations used by this scenario configuration."""
         return self._tasks
 
     @property
     def additional_data_node_configs(self) -> List[DataNodeConfig]:
+        """List of additional data nodes used by this scenario configuration."""
         return self._additional_data_nodes
 
     @property
     def additional_data_nodes(self) -> List[DataNodeConfig]:
+        """List of additional data nodes used by this scenario configuration."""
         return self._additional_data_nodes
 
     @property
     def data_node_configs(self) -> List[DataNodeConfig]:
+        """List of all data nodes used by this scenario configuration."""
         return self.__get_all_unique_data_nodes()
 
     @property
     def data_nodes(self) -> List[DataNodeConfig]:
+        """List of all data nodes used by this scenario configuration."""
         return self.__get_all_unique_data_nodes()
 
-    def __get_all_unique_data_nodes(self) -> List[DataNodeConfig]:
-        data_node_configs = set(self._additional_data_nodes)
-        for task in self._tasks:
-            data_node_configs.update(task.inputs)
-            data_node_configs.update(task.outputs)
+    def add_comparator(self, dn_config_id: str, comparator: Callable) -> None:
+        """Add a comparator to the scenario configuration.
 
-        return list(data_node_configs)
+        Parameters:
+            dn_config_id (str): The data node configuration id to which the comparator
+                will be applied.
+            comparator (Callable): The comparator function to be added.
+        """
+        self.comparators[dn_config_id].append(comparator)
+
+    def delete_comparator(self, dn_config_id: str) -> None:
+        """Delete a comparator from the scenario configuration."""
+        if dn_config_id in self.comparators:
+            del self.comparators[dn_config_id]
+
+    def add_sequences(self, sequences: Dict[str, List[TaskConfig]]) -> None:
+        """Add sequence descriptions to the scenario configuration.
+
+        When a `Scenario^` is instantiated from this configuration, the
+        sequence descriptions are used to add new sequences to the scenario.
+
+        Parameters:
+            sequences (Dict[str, List[TaskConfig]]): Dictionary of sequence descriptions.
+        """
+        self.sequences.update(sequences)
+
+    def remove_sequences(self, sequence_names: Union[str, List[str]]) -> None:
+        """Remove sequence descriptions from the scenario configuration.
+
+        Parameters:
+            sequence_names (Union[str, List[str]]): The name of the sequence or a list
+                of sequence names.
+        """
+        if isinstance(sequence_names, List):
+            for sequence_name in sequence_names:
+                self.sequences.pop(sequence_name)
+        else:
+            self.sequences.pop(sequence_names)
 
     @classmethod
-    def default_config(cls):
+    def default_config(cls) -> "ScenarioConfig":
+        """Get a scenario configuration with all the default values.
+
+        Returns:
+            A scenario configuration with all the default values.
+        """
         return ScenarioConfig(cls._DEFAULT_KEY, [], [], None, {})
 
     def _clean(self):
@@ -182,6 +224,14 @@ class ScenarioConfig(Section):
             **as_dict,
         )
 
+    def __get_all_unique_data_nodes(self) -> List[DataNodeConfig]:
+        data_node_configs = set(self._additional_data_nodes)
+        for task in self._tasks:
+            data_node_configs.update(task.inputs)
+            data_node_configs.update(task.outputs)
+
+        return list(data_node_configs)
+
     @staticmethod
     def __get_task_configs(task_config_ids: List[str], config: Optional[_Config]):
         task_configs = set()
@@ -227,13 +277,6 @@ class ScenarioConfig(Section):
         if default_section:
             self._properties = {**default_section.properties, **self._properties}
 
-    def add_comparator(self, dn_config_id: str, comparator: Callable):
-        self.comparators[dn_config_id].append(comparator)
-
-    def delete_comparator(self, dn_config_id: str):
-        if dn_config_id in self.comparators:
-            del self.comparators[dn_config_id]
-
     @staticmethod
     def _configure(
         id: str,
@@ -330,13 +373,3 @@ class ScenarioConfig(Section):
         )
         Config._register(section)
         return Config.sections[ScenarioConfig.name][_Config.DEFAULT_KEY]
-
-    def add_sequences(self, sequences: Dict[str, List[TaskConfig]]):
-        self.sequences.update(sequences)
-
-    def remove_sequences(self, sequence_names: Union[str, List[str]]):
-        if isinstance(sequence_names, List):
-            for sequence_name in sequence_names:
-                self.sequences.pop(sequence_name)
-        else:
-            self.sequences.pop(sequence_names)
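A sketch of the comparator and sequence helpers documented above; the configuration ids, the comparator lambda, and the `double` function are illustrative:

```python
from taipy import Config


def double(nb: int) -> int:
    return nb * 2


in_cfg = Config.configure_data_node("nb", default_data=21)
out_cfg = Config.configure_data_node("result")
task_cfg = Config.configure_task("double", function=double, input=in_cfg, output=out_cfg)

scenario_cfg = Config.configure_scenario("my_scenario", task_configs=[task_cfg])

# Attach a comparator to the "result" data node configuration.
scenario_cfg.add_comparator("result", lambda *data: max(data))

# Describe a sequence made of the single task, then remove it again.
scenario_cfg.add_sequences({"main": [task_cfg]})
scenario_cfg.remove_sequences("main")
```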

+ 33 - 26
taipy/core/config/task_config.py

@@ -17,31 +17,26 @@ from taipy.common.config._config import _Config
 from taipy.common.config.common._template_handler import _TemplateHandler as _tpl
 from taipy.common.config.section import Section
 
-from ..common._warnings import _warn_deprecated
 from .data_node_config import DataNodeConfig
 
 
 class TaskConfig(Section):
-    """
-    Configuration fields needed to instantiate an actual `Task^`.
-
-    Attributes:
-        id (str): Identifier of the task config. Must be a valid Python variable name.
-        inputs (Union[DataNodeConfig^, List[DataNodeConfig^]]): The optional list of
-            `DataNodeConfig^` inputs.<br/>
-            The default value is [].
-        outputs (Union[DataNodeConfig^, List[DataNodeConfig^]]): The optional list of
-            `DataNodeConfig^` outputs.<br/>
-            The default value is [].
-        skippable (bool): If True, indicates that the task can be skipped if no change has
-            been made on inputs.<br/>
-            The default value is False.
-        function (Callable): User function taking as inputs some parameters compatible with the
-            exposed types (*exposed_type* field) of the input data nodes and returning results
-            compatible with the exposed types (*exposed_type* field) of the outputs list.<br/>
-            The default value is None.
-        **properties (dict[str, any]): A dictionary of additional properties.
-    """
+    """Configuration fields needed to instantiate an actual `Task^`."""
+
+    # Attributes:
+    #     inputs (Union[DataNodeConfig^, List[DataNodeConfig^]]): The optional list of
+    #         `DataNodeConfig^` inputs.<br/>
+    #         The default value is [].
+    #     outputs (Union[DataNodeConfig^, List[DataNodeConfig^]]): The optional list of
+    #         `DataNodeConfig^` outputs.<br/>
+    #         The default value is [].
+    #     skippable (bool): If True, indicates that the task can be skipped if no change has
+    #         been made on inputs.<br/>
+    #         The default value is False.
+    #     function (Callable): User function taking as inputs some parameters compatible with the
+    #         exposed types (*exposed_type* field) of the input data nodes and returning results
+    #         compatible with the exposed types (*exposed_type* field) of the outputs list.<br/>
+    #         The default value is None.
 
     name = "TASK"
 
@@ -50,6 +45,11 @@ class TaskConfig(Section):
     _OUTPUT_KEY = "outputs"
     _IS_SKIPPABLE_KEY = "skippable"
 
+    function: Optional[Callable]
+    """User function taking as inputs some parameters compatible with the data type
+    (*exposed_type* field) of the input data nodes and returning results compatible with the
+    data type (*exposed_type* field) of the outputs list."""
+
     def __init__(
         self,
         id: str,
@@ -65,10 +65,6 @@ class TaskConfig(Section):
             self._inputs = []
         if outputs:
             self._outputs = [outputs] if isinstance(outputs, DataNodeConfig) else copy(outputs)
-            outputs_all_cacheable = all(output.cacheable for output in self._outputs)
-            if not skippable and outputs_all_cacheable:
-                _warn_deprecated("cacheable", suggest="the skippable feature")
-                skippable = True
         else:
             self._outputs = []
         self._skippable: bool = skippable
@@ -85,26 +81,36 @@ class TaskConfig(Section):
 
     @property
     def input_configs(self) -> List[DataNodeConfig]:
+        """The list of the input data node configurations."""
         return list(self._inputs)
 
     @property
     def inputs(self) -> List[DataNodeConfig]:
+        """The list of the input data node configurations."""
         return list(self._inputs)
 
     @property
     def output_configs(self) -> List[DataNodeConfig]:
+        """The list of the output data node configurations."""
         return list(self._outputs)
 
     @property
     def outputs(self) -> List[DataNodeConfig]:
+        """The list of the output data node configurations."""
         return list(self._outputs)
 
     @property
     def skippable(self) -> bool:
+        """Indicates if the task can be skipped if no change has been made on inputs."""
         return _tpl._replace_templates(self._skippable)
 
     @classmethod
-    def default_config(cls):
+    def default_config(cls) -> "TaskConfig":
+        """Get the default task configuration.
+
+        Returns:
+            The default task configuration.
+        """
         return TaskConfig(cls._DEFAULT_KEY, None, [], [], False)
 
     def _clean(self) -> None:
@@ -214,6 +220,7 @@ class TaskConfig(Section):
                 The default value is False.
             **properties (dict[str, any]): A keyworded variable length list of additional
                 arguments.
+
         Returns:
             The default task configuration.
         """

+ 20 - 20
taipy/core/cycle/cycle.py

@@ -33,7 +33,7 @@ class Cycle(_Entity, _Labeled):
     The data applications to solve these business problems often require modeling the
     corresponding periods (i.e., cycles).
 
-    For this purpose, a `Cycle^` represents a single iteration of such a time pattern.
+    For this purpose, a `Cycle` represents a single iteration of such a time pattern.
     Each _cycle_ has a start date and a duration. Examples of cycles are:
 
     - Monday, 2. January 2023 as a daily cycle
@@ -41,7 +41,7 @@ class Cycle(_Entity, _Labeled):
     - January 2023 as a monthly cycle
     - etc.
 
-    `Cycle^`s are created along with the `Scenario^`s that are attached to them.
+    `Cycle`s are created along with the `Scenario^`s that are attached to them.
     At its creation, a new scenario is attached to a single cycle, the one that
     matches its optional _frequency_ and its _creation_date_.
 
@@ -53,18 +53,9 @@ class Cycle(_Entity, _Labeled):
     - `Frequency.QUARTERLY`
     - `Frequency.YEARLY`
 
-    Attributes:
-        id (str): The unique identifier of the cycle.
-        frequency (Frequency^): The frequency of this cycle.
-        creation_date (datetime): The date and time of the creation of this cycle.
-        start_date (datetime): The date and time of the start of this cycle.
-        end_date (datetime): The date and time of the end of this cycle.
-        name (str): The name of this cycle.
-        properties (dict[str, Any]): A dictionary of additional properties.
-
     !!! example "Example for January cycle"
 
-        ![cycles](../img/cycles_january_colored.svg){ align=left width="250" }
+        ![cycles](../../../../img/cycles_january_colored.svg){ align=left width="250" }
 
         Let's assume an end-user publishes production orders (i.e., a production plan) every
         month. During each month (the cycle), he/she will be interested in experimenting with
@@ -82,7 +73,7 @@ class Cycle(_Entity, _Labeled):
 
     !!! example "Example for February cycle"
 
-        ![cycles](../img/cycles_colored.svg){ align=left width="250" }
+        ![cycles](../../../../img/cycles_colored.svg){ align=left width="250" }
         Now the user starts working on the February work cycle. He or she creates two
         scenarios for the February cycle (one with a low capacity assumption and one with
         a high capacity assumption). The user can then decide to elect the low capacity
@@ -103,6 +94,9 @@ class Cycle(_Entity, _Labeled):
     __SEPARATOR = "_"
     _MANAGER_NAME = "cycle"
 
+    id: CycleId
+    """The unique identifier of the cycle."""
+
     def __init__(
         self,
         frequency: Frequency,
@@ -143,7 +137,8 @@ class Cycle(_Entity, _Labeled):
 
     @property  # type: ignore
     @_self_reload(_MANAGER_NAME)
-    def frequency(self):
+    def frequency(self) -> Frequency:
+        """The frequency of this cycle."""
         return self._frequency
 
     @frequency.setter  # type: ignore
@@ -153,7 +148,8 @@ class Cycle(_Entity, _Labeled):
 
     @property  # type: ignore
     @_self_reload(_MANAGER_NAME)
-    def creation_date(self):
+    def creation_date(self) -> datetime:
+        """The date and time of the creation of this cycle."""
         return self._creation_date
 
     @creation_date.setter  # type: ignore
@@ -163,7 +159,8 @@ class Cycle(_Entity, _Labeled):
 
     @property  # type: ignore
     @_self_reload(_MANAGER_NAME)
-    def start_date(self):
+    def start_date(self) -> datetime:
+        """The date and time of the start of this cycle."""
         return self._start_date
 
     @start_date.setter  # type: ignore
@@ -173,7 +170,8 @@ class Cycle(_Entity, _Labeled):
 
     @property  # type: ignore
     @_self_reload(_MANAGER_NAME)
-    def end_date(self):
+    def end_date(self) -> datetime:
+        """The date and time of the end of this cycle."""
         return self._end_date
 
     @end_date.setter  # type: ignore
@@ -183,7 +181,8 @@ class Cycle(_Entity, _Labeled):
 
     @property  # type: ignore
     @_self_reload(_MANAGER_NAME)
-    def name(self):
+    def name(self) -> str:
+        """The name of this cycle."""
         return self._name
 
     @name.setter  # type: ignore
@@ -192,7 +191,8 @@ class Cycle(_Entity, _Labeled):
         self._name = val
 
     @property
-    def properties(self):
+    def properties(self) -> _Properties:
+        """A dictionary of additional properties."""
         self._properties = _Reloader()._reload(self._MANAGER_NAME, self)._properties
         return self._properties
 
@@ -214,7 +214,7 @@ class Cycle(_Entity, _Labeled):
     def __eq__(self, other):
         return isinstance(other, Cycle) and self.id == other.id
 
-    def __hash__(self):
+    def __hash__(self) -> int:
         return hash(self.id)
 
     def get_label(self) -> str:
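A sketch of how cycles come into existence through a scenario configuration's frequency; the configuration id is illustrative:

```python
import taipy as tp
from taipy import Config, Frequency

scenario_cfg = Config.configure_scenario("monthly_plan", frequency=Frequency.MONTHLY)

if __name__ == "__main__":
    tp.Orchestrator().run()
    scenario = tp.create_scenario(scenario_cfg)

    # The scenario is attached to the monthly cycle matching its creation date.
    cycle = scenario.cycle
    print(cycle.start_date, cycle.end_date, cycle.name)
```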

+ 1 - 0
taipy/core/cycle/cycle_id.py

@@ -12,4 +12,5 @@
 from typing import NewType
 
 CycleId = NewType("CycleId", str)
+"""Type that holds a `Cycle^` identifier."""
 CycleId.__doc__ = """Type that holds a `Cycle^` identifier."""

+ 13 - 12
taipy/core/data/_abstract_sql.py

@@ -135,6 +135,19 @@ class _AbstractSQLDataNode(DataNode, _TabularDataNodeMixin):
             }
         )
 
+    def __setattr__(self, key: str, value) -> None:
+        if key in self.__ENGINE_PROPERTIES:
+            self._engine = None
+        return super().__setattr__(key, value)
+
+    def filter(self, operators: Optional[Union[List, Tuple]] = None, join_operator=JoinOperator.AND):
+        properties = self.properties
+        if properties[self._EXPOSED_TYPE_PROPERTY] == self._EXPOSED_TYPE_PANDAS:
+            return self._read_as_pandas_dataframe(operators=operators, join_operator=join_operator)
+        if properties[self._EXPOSED_TYPE_PROPERTY] == self._EXPOSED_TYPE_NUMPY:
+            return self._read_as_numpy(operators=operators, join_operator=join_operator)
+        return self._read_as(operators=operators, join_operator=join_operator)
+
     def _check_required_properties(self, properties: Dict):
         db_engine = properties.get(self.__DB_ENGINE_KEY)
         if not db_engine:
@@ -192,14 +205,6 @@ class _AbstractSQLDataNode(DataNode, _TabularDataNodeMixin):
             return "sqlite:///" + os.path.join(folder_path, f"{db_name}{file_extension}")
         raise UnknownDatabaseEngine(f"Unknown engine: {engine}")
 
-    def filter(self, operators: Optional[Union[List, Tuple]] = None, join_operator=JoinOperator.AND):
-        properties = self.properties
-        if properties[self._EXPOSED_TYPE_PROPERTY] == self._EXPOSED_TYPE_PANDAS:
-            return self._read_as_pandas_dataframe(operators=operators, join_operator=join_operator)
-        if properties[self._EXPOSED_TYPE_PROPERTY] == self._EXPOSED_TYPE_NUMPY:
-            return self._read_as_numpy(operators=operators, join_operator=join_operator)
-        return self._read_as(operators=operators, join_operator=join_operator)
-
     def _read(self):
         properties = self.properties
         if properties[self._EXPOSED_TYPE_PROPERTY] == self._EXPOSED_TYPE_PANDAS:
@@ -306,7 +311,3 @@ class _AbstractSQLDataNode(DataNode, _TabularDataNodeMixin):
     def _do_write(self, data, engine, connection) -> None:
         raise NotImplementedError
 
-    def __setattr__(self, key: str, value) -> None:
-        if key in self.__ENGINE_PROPERTIES:
-            self._engine = None
-        return super().__setattr__(key, value)
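A sketch of the relocated `filter()` method in use, assuming `sales_dn` is an existing SQL-backed data node with a pandas *exposed_type*; the column names and thresholds are illustrative:

```python
from taipy.core.data.operator import JoinOperator, Operator

# `sales_dn` is assumed to be an existing SQL-backed data node exposing a
# pandas DataFrame with "revenue" and "region" columns.
filtered = sales_dn.filter(
    [("revenue", 1_000, Operator.GREATER_THAN), ("region", "EU", Operator.EQUAL)],
    JoinOperator.AND,
)
```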

+ 37 - 37
taipy/core/data/_file_datanode_mixin.py

@@ -26,8 +26,7 @@ from .data_node_id import Edit
 
 
 class _FileDataNodeMixin(object):
-    """Mixin class designed to handle file-based data nodes
-    (CSVDataNode, ParquetDataNode, ExcelDataNode, PickleDataNode, JSONDataNode, etc.)."""
+    """Mixin class designed to handle file-based data nodes."""
 
     __EXTENSION_MAP = {"csv": "csv", "excel": "xlsx", "parquet": "parquet", "pickle": "p", "json": "json"}
 
@@ -51,52 +50,24 @@ class _FileDataNodeMixin(object):
         properties[self._IS_GENERATED_KEY] = self._is_generated
         properties[self._PATH_KEY] = self._path
 
-    def _write_default_data(self, default_value: Any):
-        if default_value is not None and not os.path.exists(self._path):
-            self._write(default_value)  # type: ignore[attr-defined]
-            self._last_edit_date = DataNode._get_last_modified_datetime(self._path) or datetime.now()
-            self._edits.append(  # type: ignore[attr-defined]
-                Edit(
-                    {
-                        "timestamp": self._last_edit_date,
-                        "editor": "TAIPY",
-                        "comment": "Default data written.",
-                    }
-                )
-            )
-
-        if not self._last_edit_date and isfile(self._path):
-            self._last_edit_date = datetime.now()
-
     @property  # type: ignore
     @_self_reload(DataNode._MANAGER_NAME)
     def is_generated(self) -> bool:
+        """Indicates if the file is generated."""
         return self._is_generated
 
     @property  # type: ignore
     @_self_reload(DataNode._MANAGER_NAME)
-    def path(self) -> Any:
+    def path(self) -> str:
+        """The path to the file data of the data node."""
         return self._path
 
     @path.setter
-    def path(self, value):
+    def path(self, value) -> None:
         self._path = value
         self.properties[self._PATH_KEY] = value
         self.properties[self._IS_GENERATED_KEY] = False
 
-    def _build_path(self, storage_type) -> str:
-        folder = f"{storage_type}s"
-        dir_path = pathlib.Path(Config.core.storage_folder) / folder
-        if not dir_path.exists():
-            dir_path.mkdir(parents=True, exist_ok=True)
-        return str(dir_path / f"{self.id}.{self.__EXTENSION_MAP.get(storage_type)}")  # type: ignore[attr-defined]
-
-    def _migrate_path(self, storage_type, old_path) -> str:
-        new_path = self._build_path(storage_type)
-        if os.path.exists(old_path):
-            shutil.move(old_path, new_path)
-        return new_path
-
     def is_downloadable(self) -> ReasonCollection:
         """Check if the data node is downloadable.
 
@@ -111,12 +82,11 @@ class _FileDataNodeMixin(object):
         return collection
 
     def is_uploadable(self) -> ReasonCollection:
-        """Check if the data node is uploadable.
+        """Check if the data node is upload-able.
 
         Returns:
-            A `ReasonCollection^` object containing the reasons why the data node is not uploadable.
+            A `ReasonCollection^` object containing the reasons why the data node is not upload-able.
         """
-
         return ReasonCollection()
 
     def _get_downloadable_path(self) -> str:
@@ -180,3 +150,33 @@ class _FileDataNodeMixin(object):
 
     def _read_from_path(self, path: Optional[str] = None, **read_kwargs) -> Any:
         raise NotImplementedError
+
+    def _write_default_data(self, default_value: Any):
+        if default_value is not None and not os.path.exists(self._path):
+            self._write(default_value)  # type: ignore[attr-defined]
+            self._last_edit_date = DataNode._get_last_modified_datetime(self._path) or datetime.now()
+            self._edits.append(  # type: ignore[attr-defined]
+                Edit(
+                    {
+                        "timestamp": self._last_edit_date,
+                        "editor": "TAIPY",
+                        "comment": "Default data written.",
+                    }
+                )
+            )
+
+        if not self._last_edit_date and isfile(self._path):
+            self._last_edit_date = datetime.now()
+
+    def _build_path(self, storage_type) -> str:
+        folder = f"{storage_type}s"
+        dir_path = pathlib.Path(Config.core.storage_folder) / folder
+        if not dir_path.exists():
+            dir_path.mkdir(parents=True, exist_ok=True)
+        return str(dir_path / f"{self.id}.{self.__EXTENSION_MAP.get(storage_type)}")  # type: ignore[attr-defined]
+
+    def _migrate_path(self, storage_type, old_path) -> str:
+        new_path = self._build_path(storage_type)
+        if os.path.exists(old_path):
+            shutil.move(old_path, new_path)
+        return new_path
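A sketch of the documented `path` setter behavior, assuming `sales_dn` is an existing file-based data node; the file path is illustrative:

```python
# `sales_dn` is assumed to be a file-based data node (e.g. a CSVDataNode).
print(sales_dn.path, sales_dn.is_generated)

# Pointing the node at a user-provided file marks it as no longer generated.
sales_dn.path = "data/external_sales.csv"
print(sales_dn.is_generated)  # False
```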

+ 2 - 2
taipy/core/data/_tabular_datanode_mixin.py

@@ -18,8 +18,7 @@ from ..exceptions.exceptions import InvalidExposedType
 
 
 class _TabularDataNodeMixin(object):
-    """Mixin class designed to handle tabular representable data nodes
-    (CSVDataNode, ParquetDataNode, ExcelDataNode, SQLTableDataNode and SQLDataNode)."""
+    """Mixin class designed to handle tabular representable data nodes."""
 
     _HAS_HEADER_PROPERTY = "has_header"
     _EXPOSED_TYPE_PROPERTY = "exposed_type"
@@ -45,6 +44,7 @@ class _TabularDataNodeMixin(object):
         if callable(custom_encoder):
             self._encoder = custom_encoder
 
+
     def _convert_data_to_dataframe(self, exposed_type: Any, data: Any) -> Union[pd.DataFrame, pd.Series]:
         if exposed_type == self._EXPOSED_TYPE_PANDAS and isinstance(data, (pd.DataFrame, pd.Series)):
             return data

+ 17 - 35
taipy/core/data/aws_s3.py

@@ -29,41 +29,22 @@ from .data_node_id import DataNodeId, Edit
 class S3ObjectDataNode(DataNode):
     """Data Node object stored in an Amazon Web Service S3 Bucket.
 
-    Attributes:
-        config_id (str): Identifier of the data node configuration. It must be a valid Python
-            identifier.
-        scope (Scope^): The scope of this data node.
-        id (str): The unique identifier of this data node.
-        owner_id (str): The identifier of the owner (sequence_id, scenario_id, cycle_id) or
-            None.
-        parent_ids (Optional[Set[str]]): The identifiers of the parent tasks or `None`.
-        last_edit_date (datetime): The date and time of the last modification.
-        edits (List[Edit^]): The ordered list of edits for that job.
-        version (str): The string indicates the application version of the data node to instantiate. If not provided,
-            the current version is used.
-        validity_period (Optional[timedelta]): The duration implemented as a timedelta since the last edit date for
-            which the data node can be considered up-to-date. Once the validity period has passed, the data node is
-            considered stale and relevant tasks will run even if they are skippable (see the
-            [Task management](../../userman/scenario_features/sdm/task/index.md) page for more details).
-            If _validity_period_ is set to `None`, the data node is always up-to-date.
-        edit_in_progress (bool): True if a task computing the data node has been submitted
-            and not completed yet. False otherwise.
-        editor_id (Optional[str]): The identifier of the user who is currently editing the data node.
-        editor_expiration_date (Optional[datetime]): The expiration date of the editor lock.
-        properties (dict[str, Any]): A dictionary of additional properties. Note that the
-            _properties_ parameter must at least contain an entry for _"aws_access_key"_ , _"aws_secret_access_key"_ ,
-            _aws_s3_bucket_name_ and _aws_s3_object_key_ :
-
-            - _"aws_access_key"_ `(str)`: Amazon Web Services ID for to identify account\n
-            - _"aws_secret_access_key"_ `(str)`: Amazon Web Services access key to authenticate programmatic requests.\n
-            - _"aws_region"_ `(Any)`: Self-contained geographic area where Amazon Web Services (AWS) infrastructure is
-                    located.\n
-            - _"aws_s3_bucket_name"_ `(str)`: unique identifier for a container that stores objects in Amazon Simple
-                    Storage Service (S3).\n
-            - _"aws_s3_object_key"_ `(str)`:  unique idntifier for the name of the object(file) that has to be read
-                    or written. \n
-            - _"aws _s3_object_parameters"_ `(str)`: A dictionary of additional arguments to be passed to interact with
-                    the AWS service\n
+    The *properties* attribute must contain the following required entries:
+
+    - *aws_access_key* (`str`): Amazon Web Services ID used to identify the account.
+    - *aws_secret_access_key* (`str`): Amazon Web Services access key to
+        authenticate programmatic requests.
+    - *aws_s3_bucket_name* (`str`): Unique identifier for a container that stores
+        objects in Amazon Simple Storage Service (S3).
+    - *aws_s3_object_key* (`str`): Unique identifier for the name of the object (file)
+        that has to be read or written.
+
+    The *properties* attribute can also contain the following optional entries:
+
+    - *aws_region* (`Any`): Self-contained geographic area where Amazon Web Services
+        (AWS) infrastructure is located.
+    - *aws_s3_object_parameters* (`dict`): A dictionary of additional arguments to be
+        passed when interacting with the AWS service.
     """
 
     __STORAGE_TYPE = "s3_object"
@@ -144,6 +125,7 @@ class S3ObjectDataNode(DataNode):
 
     @classmethod
     def storage_type(cls) -> str:
+        """Return the storage type of the data node: "s3_object"."""
         return cls.__STORAGE_TYPE
 
     def _read(self):
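A sketch of how the required and optional entries above are supplied through `Config.configure_s3_object_data_node()`; all values below are placeholders, not working credentials:

```python
from taipy import Config

s3_cfg = Config.configure_s3_object_data_node(
    id="my_s3_object",
    aws_access_key="YOUR_ACCESS_KEY_ID",
    aws_secret_access_key="YOUR_SECRET_ACCESS_KEY",
    aws_s3_bucket_name="my-bucket",
    aws_s3_object_key="path/to/object.json",
    aws_region="eu-west-1",  # optional
)
```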

+ 20 - 40
taipy/core/data/csv.py

@@ -30,35 +30,15 @@ from .data_node_id import DataNodeId, Edit
 class CSVDataNode(DataNode, _FileDataNodeMixin, _TabularDataNodeMixin):
     """Data Node stored as a CSV file.
 
-    Attributes:
-        config_id (str): Identifier of the data node configuration. This string must be a valid
-            Python identifier.
-        scope (Scope^): The scope of this data node.
-        id (str): The unique identifier of this data node.
-        owner_id (str): The identifier of the owner (sequence_id, scenario_id, cycle_id) or `None`.
-        parent_ids (Optional[Set[str]]): The identifiers of the parent tasks or `None`.
-        last_edit_date (datetime): The date and time of the last modification.
-        edits (List[Edit^]): The ordered list of edits for that job.
-        version (str): The string indicates the application version of the data node to instantiate. If not provided,
-            the current version is used.
-        validity_period (Optional[timedelta]): The duration implemented as a timedelta since the last edit date for
-            which the data node can be considered up-to-date. Once the validity period has passed, the data node is
-            considered stale and relevant tasks will run even if they are skippable (see the
-            [Task management](../../userman/scenario_features/sdm/task/index.md) page for more details).
-            If _validity_period_ is set to `None`, the data node is always up-to-date.
-        edit_in_progress (bool): True if a task computing the data node has been submitted
-            and not completed yet. False otherwise.
-        editor_id (Optional[str]): The identifier of the user who is currently editing the data node.
-        editor_expiration_date (Optional[datetime]): The expiration date of the editor lock.
-        path (str): The path to the CSV file.
-        properties (dict[str, Any]): A dictionary of additional properties. The _properties_
-            must have a _"default_path"_ or _"path"_ entry with the path of the CSV file:
-
-            - _"default_path"_ `(str)`: The default path of the CSV file.\n
-            - _"encoding"_ `(str)`: The encoding of the CSV file. The default value is `utf-8`.\n
-            - _"default_data"_: The default data of the data nodes instantiated from this csv data node.\n
-            - _"has_header"_ `(bool)`: If True, indicates that the CSV file has a header.\n
-            - _"exposed_type"_: The exposed type of the data read from CSV file. The default value is `pandas`.\n
+    The *properties* attribute can contain the following optional entries:
+
+    - *encoding* (`str`): The encoding of the CSV file. The default value is `utf-8`.
+    - *default_path* (`str`): The default path of the CSV file used at the instantiation of the
+        data node.
+    - *default_data*: The default data of the data node. It is used at the data node instantiation
+        to write the data to the CSV file.
+    - *has_header* (`bool`): If True, indicates that the CSV file has a header.
+    - *exposed_type*: The exposed type of the data read from the CSV file. The default value is `pandas`.
     """
 
     __STORAGE_TYPE = "csv"
@@ -136,6 +116,17 @@ class CSVDataNode(DataNode, _FileDataNodeMixin, _TabularDataNodeMixin):
     def storage_type(cls) -> str:
         return cls.__STORAGE_TYPE
 
+    def write_with_column_names(self, data: Any, columns: Optional[List[str]] = None, job_id: Optional[JobId] = None):
+        """Write a selection of columns.
+
+        Parameters:
+            data (Any): The data to write.
+            columns (Optional[List[str]]): The list of column names to write.
+            job_id (Optional[JobId]): An optional identifier of the writer.
+        """
+        self._write(data, columns)
+        self.track_edit(timestamp=datetime.now(), job_id=job_id)
+
     def _read(self):
         return self._read_from_path()
 
@@ -202,14 +193,3 @@ class CSVDataNode(DataNode, _FileDataNodeMixin, _TabularDataNodeMixin):
             encoding=properties[self.__ENCODING_KEY],
             header=properties[self._HAS_HEADER_PROPERTY],
         )
-
-    def write_with_column_names(self, data: Any, columns: Optional[List[str]] = None, job_id: Optional[JobId] = None):
-        """Write a selection of columns.
-
-        Parameters:
-            data (Any): The data to write.
-            columns (Optional[List[str]]): The list of column names to write.
-            job_id (JobId^): An optional identifier of the writer.
-        """
-        self._write(data, columns)
-        self.track_edit(timestamp=datetime.now(), job_id=job_id)
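As a usage sketch for the method relocated above, writing only a selection of columns might look like this; the `csv_dn` variable is assumed to be an existing `CSVDataNode` instance and is not part of this diff:

```python
import pandas as pd

# Sketch: write only two of the three columns through an existing CSVDataNode.
data = pd.DataFrame({"name": ["a", "b"], "score": [1, 2], "scratch": [0, 0]})
csv_dn.write_with_column_names(data, columns=["name", "score"])
```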

+ 186 - 182
taipy/core/data/data_node.py

@@ -28,7 +28,6 @@ from .._entity._properties import _Properties
 from .._entity._ready_to_run_property import _ReadyToRunProperty
 from .._entity._reload import _Reloader, _self_reload, _self_setter
 from .._version._version_manager_factory import _VersionManagerFactory
-from ..common._warnings import _warn_deprecated
 from ..exceptions.exceptions import DataNodeIsBeingEdited, NoData
 from ..job.job_id import JobId
 from ..notification.event import Event, EventEntityType, EventOperation, _make_event
@@ -100,37 +99,6 @@ class DataNode(_Entity, _Labeled):
             # Read the data
             print(dataset.read())
         ```
-
-    Attributes:
-        config_id (str): Identifier of the data node configuration. It must be a valid Python
-            identifier.
-        scope (Scope^): The scope of this data node.
-        id (str): The unique identifier of this data node.
-        name (str): A user-readable name of this data node.
-        owner_id (str): The identifier of the owner (sequence_id, scenario_id, cycle_id) or
-            None.
-        parent_ids (Optional[Set[str]]): The set of identifiers of the parent tasks.
-        last_edit_date (datetime): The date and time of the last modification.
-        edits (List[Edit^]): The list of Edits (an alias for dict) containing metadata about each
-            data edition including but not limited to:
-                <ul><li>timestamp: The time instant of the writing </li>
-                <li>comments: Representation of a free text to explain or comment on a data change</li>
-                <li>job_id: Only populated when the data node is written by a task execution and
-                    corresponds to the job's id.</li></ul>
-            Additional metadata related to the edition made to the data node can also be provided in Edits.
-        version (str): The string indicates the application version of the data node to
-            instantiate. If not provided, the current version is used.
-        validity_period (Optional[timedelta]): The duration implemented as a timedelta since the last edit date for
-            which the data node can be considered up-to-date. Once the validity period has passed, the data node is
-            considered stale and relevant tasks will run even if they are skippable (see the
-            [Task orchestration](../../userman/scenario_features/sdm/task/index.md#task-configuration)
-            page for more details).
-            If _validity_period_ is set to `None`, the data node is always up-to-date.
-        edit_in_progress (bool): True if the data node is locked for modification. False
-            otherwise.
-        editor_id (Optional[str]): The identifier of the user who is currently editing the data node.
-        editor_expiration_date (Optional[datetime]): The expiration date of the editor lock.
-        kwargs: A dictionary of additional properties.
     """
 
     _ID_PREFIX = "DATANODE"
@@ -143,6 +111,9 @@ class DataNode(_Entity, _Labeled):
 
     _TAIPY_PROPERTIES: Set[str] = set()
 
+    id: DataNodeId
+    """The unique identifier of the data node."""
+
     def __init__(
         self,
         config_id,
@@ -176,52 +147,65 @@ class DataNode(_Entity, _Labeled):
 
         self._properties: _Properties = _Properties(self, **kwargs)
 
-    @staticmethod
-    def _new_id(config_id: str) -> DataNodeId:
-        """Generate a unique datanode identifier."""
-        return DataNodeId(
-            DataNode.__ID_SEPARATOR.join([DataNode._ID_PREFIX, _validate_id(config_id), str(uuid.uuid4())])
-        )
+    def __eq__(self, other) -> bool:
+        """Check if two data nodes are equal."""
+        return isinstance(other, DataNode) and self.id == other.id
+
+    def __ne__(self, other) -> bool:
+        """Check if two data nodes are different."""
+        return not self == other
+
+    def __hash__(self) -> int:
+        """Hash the data node."""
+        return hash(self.id)
+
+    def __getstate__(self) -> Dict[str, Any]:
+        return vars(self)
+
+    def __setstate__(self, state) -> None:
+        vars(self).update(state)
+
+    def __getitem__(self, item) -> Any:
+        data = self._read()
+        return _FilterDataNode._filter_by_key(data, item)
 
     @property
-    def config_id(self):
+    def config_id(self) -> str:
+        """Identifier of the data node configuration. It must be a valid Python identifier."""
         return self._config_id
 
     @property
-    def owner_id(self):
+    def owner_id(self) -> Optional[str]:
+        """The identifier of the owner (sequence_id, scenario_id, cycle_id) or None."""
         return self._owner_id
 
-    def get_parents(self):
-        """Get all parents of this data node."""
-        from ... import core as tp
-
-        return tp.get_parents(self)
-
     @property  # type: ignore
     @_self_reload(_MANAGER_NAME)
-    def parent_ids(self):
-        """List of parent ids of this data node."""
+    def parent_ids(self) -> Set[str]:
+        """The set of identifiers of the parent tasks."""
         return self._parent_ids
 
     @property  # type: ignore
     @_self_reload(_MANAGER_NAME)
-    def edits(self):
-        """Get all `Edit^`s of this data node."""
-        return self._edits
-
-    def get_last_edit(self) -> Optional[Edit]:
-        """Get last `Edit^` of this data node.
-
-        Returns:
-            None if there has been no `Edit^` on this data node.
+    def edits(self) -> List[Edit]:
+        """The list of Edits.
+
+        The list of Edits (an alias for dict) containing metadata about each
+        data edition including but not limited to:
+            <ul><li>timestamp: The time instant of the writing </li>
+            <li>comments: Representation of a free text to explain or comment on a data change</li>
+            <li>job_id: Only populated when the data node is written by a task execution and
+                corresponds to the job's id.</li></ul>
+        Additional metadata related to the edition made to the data node can also be provided in Edits.
         """
-        return self._edits[-1] if self._edits else None
+        return self._edits
 
     @property  # type: ignore
     @_self_reload(_MANAGER_NAME)
-    def last_edit_date(self):
+    def last_edit_date(self) -> Optional[datetime]:
+        """The date and time of the last modification."""
         last_modified_datetime = self._get_last_modified_datetime(self._properties.get(self._PATH_KEY, None))
-        if last_modified_datetime and last_modified_datetime > self._last_edit_date:
+        if last_modified_datetime and last_modified_datetime > self._last_edit_date:  # type: ignore
             return last_modified_datetime
         else:
             return self._last_edit_date
@@ -234,7 +218,8 @@ class DataNode(_Entity, _Labeled):
 
     @property  # type: ignore
     @_self_reload(_MANAGER_NAME)
-    def scope(self):
+    def scope(self) -> Scope:
+        """The data node scope."""
         return self._scope
 
     @scope.setter  # type: ignore
@@ -245,6 +230,15 @@ class DataNode(_Entity, _Labeled):
     @property  # type: ignore
     @_self_reload(_MANAGER_NAME)
     def validity_period(self) -> Optional[timedelta]:
+        """The duration since the last edit date for which the data node is considered up-to-date.
+
+        The duration implemented as a timedelta since the last edit date for which the data node
+        can be considered up-to-date. Once the validity period has passed, the data node is
+        considered stale and relevant tasks will run even if they are skippable (see the
+        Task orchestration page of the user manual for more details).
+
+        If *validity_period* is set to `None`, the data node is always up-to-date.
+        """
         return self._validity_period if self._validity_period else None
 
     @validity_period.setter  # type: ignore
@@ -266,6 +260,7 @@ class DataNode(_Entity, _Labeled):
 
     @property  # type: ignore
     def name(self) -> Optional[str]:
+        """A human-readable name of the data node."""
         return self.properties.get("name")
 
     @name.setter  # type: ignore
@@ -273,22 +268,17 @@ class DataNode(_Entity, _Labeled):
         self.properties["name"] = val
 
     @property
-    def version(self):
-        return self._version
-
-    @property
-    def cacheable(self):
-        """Deprecated. Use `skippable` attribute of a `Task^` instead."""
-        _warn_deprecated("cacheable", suggest="the skippable feature")
-        return self.properties.get("cacheable", False)
+    def version(self) -> str:
+        """The string indicates the application version of the data node to instantiate.
 
-    @cacheable.setter
-    def cacheable(self, val):
-        _warn_deprecated("cacheable", suggest="the skippable feature")
+        If not provided, the current version is used.
+        """
+        return self._version
 
     @property  # type: ignore
     @_self_reload(_MANAGER_NAME)
-    def edit_in_progress(self):
+    def edit_in_progress(self) -> bool:
+        """True if the data node is locked for modification. False otherwise."""
         return self._edit_in_progress
 
     @edit_in_progress.setter  # type: ignore
@@ -299,7 +289,8 @@ class DataNode(_Entity, _Labeled):
 
     @property  # type: ignore
     @_self_reload(_MANAGER_NAME)
-    def editor_id(self):
+    def editor_id(self) -> Optional[str]:
+        """The identifier of the user who is currently editing the data node."""
         return self._editor_id
 
     @editor_id.setter  # type: ignore
@@ -309,7 +300,8 @@ class DataNode(_Entity, _Labeled):
 
     @property  # type: ignore
     @_self_reload(_MANAGER_NAME)
-    def editor_expiration_date(self):
+    def editor_expiration_date(self) -> Optional[datetime]:
+        """The expiration date of the editor lock."""
         return self._editor_expiration_date
 
     @editor_expiration_date.setter  # type: ignore
@@ -319,7 +311,7 @@ class DataNode(_Entity, _Labeled):
 
     @property  # type: ignore
     @_self_reload(_MANAGER_NAME)
-    def job_ids(self):
+    def job_ids(self) -> List[JobId]:
         """List of the jobs having edited this data node."""
         return [edit.get("job_id") for edit in self.edits if edit.get("job_id")]
 
@@ -329,45 +321,71 @@ class DataNode(_Entity, _Labeled):
         self._properties = _Reloader()._reload(self._MANAGER_NAME, self)._properties
         return self._properties
 
-    def _get_user_properties(self) -> Dict[str, Any]:
-        """Get user properties."""
-        return {key: value for key, value in self.properties.items() if key not in self._TAIPY_PROPERTIES}
-
-    def __eq__(self, other):
-        return isinstance(other, DataNode) and self.id == other.id
-
-    def __ne__(self, other):
-        return not self == other
-
-    def __hash__(self):
-        return hash(self.id)
+    @property  # type: ignore
+    @_self_reload(_MANAGER_NAME)
+    def is_ready_for_reading(self) -> bool:
+        """Indicate if this data node is ready for reading.
 
-    def __getstate__(self):
-        return vars(self)
+        False if the data is locked for modification or if the data has never been written.
+        True otherwise.
+        """
+        if self._edit_in_progress:
+            return False
+        if not self._last_edit_date:
+            # Never been written so it is not up-to-date
+            return False
+        return True
 
-    def __setstate__(self, state):
-        vars(self).update(state)
+    @property  # type: ignore
+    @_self_reload(_MANAGER_NAME)
+    def is_valid(self) -> bool:
+        """Indicate if this data node is valid.
 
-    @classmethod
-    def _get_last_modified_datetime(cls, path: Optional[str] = None) -> Optional[datetime]:
-        if path and os.path.isfile(path):
-            return datetime.fromtimestamp(os.path.getmtime(path))
+        False if the data has never been written or if the expiration date has passed.<br/>
+        True otherwise.
+        """
+        if not self._last_edit_date:
+            # Never been written so it is not valid
+            return False
+        if not self._validity_period:
+            # No validity period and has already been written, so it is valid
+            return True
+        if datetime.now() > self.expiration_date:
+            # expiration_date has been passed
+            return False
+        return True
 
-        last_modified_datetime = None
-        if path and os.path.isdir(path):
-            for filename in os.listdir(path):
-                filepath = os.path.join(path, filename)
-                if os.path.isfile(filepath):
-                    file_mtime = datetime.fromtimestamp(os.path.getmtime(filepath))
+    @property
+    def is_up_to_date(self) -> bool:
+        """Indicate if this data node is up-to-date.
 
-                    if last_modified_datetime is None or file_mtime > last_modified_datetime:
-                        last_modified_datetime = file_mtime
+        False if a preceding data node has been updated since the selected data node
+        was last edited, or if the selected data is invalid.<br/>
+        True otherwise.
+        """
+        if self.is_valid:
+            from ..scenario.scenario import Scenario
+            from ..taipy import get_parents
 
-        return last_modified_datetime
+            parent_scenarios: Set[Scenario] = get_parents(self)["scenario"]  # type: ignore
+            for parent_scenario in parent_scenarios:
+                for ancestor_node in nx.ancestors(parent_scenario._build_dag(), self):
+                    if (
+                        isinstance(ancestor_node, DataNode)
+                        and ancestor_node.last_edit_date
+                        and ancestor_node.last_edit_date > self.last_edit_date
+                    ):
+                        return False
+            return True
+        return False
 
     @classmethod
     @abstractmethod
     def storage_type(cls) -> str:
+        """The storage type of the data node.
+
+        Each subclass must implement this method exposing the data node storage type.
+        """
         raise NotImplementedError
 
     def read_or_raise(self) -> Any:
@@ -402,7 +420,7 @@ class DataNode(_Entity, _Labeled):
 
         Parameters:
             data (Any): The data to write to this data node.
-            job_id (JobId^): An optional identifier of the writer.
+            job_id (JobId): An optional identifier of the writer.
             **kwargs (dict[str, any]): Extra information to attach to the edit document
                 corresponding to this write.
         """
@@ -418,7 +436,7 @@ class DataNode(_Entity, _Labeled):
 
         Parameters:
             data (Any): The data to write to this data node.
-            job_id (JobId^): An optional identifier of the writer.
+            job_id (JobId): An optional identifier of the writer.
             **kwargs (dict[str, any]): Extra information to attach to the edit document
                 corresponding to this write.
         """
@@ -489,7 +507,7 @@ class DataNode(_Entity, _Labeled):
         self.editor_expiration_date = None
         self.edit_in_progress = False
 
-    def filter(self, operators: Union[List, Tuple], join_operator=JoinOperator.AND):
+    def filter(self, operators: Union[List, Tuple], join_operator=JoinOperator.AND) -> Any:
         """Read and filter the data referenced by this data node.
 
         The data is filtered by the provided list of 3-tuples (key, value, `Operator^`).
@@ -502,17 +520,52 @@ class DataNode(_Entity, _Labeled):
                 each is in the form of (key, value, `Operator^`).
             join_operator (JoinOperator^): The operator used to join the multiple filter
                 3-tuples.
+
         Returns:
             The filtered data.
+
         Raises:
             NotImplementedError: If the data type is not supported.
         """
         data = self._read()
         return _FilterDataNode._filter(data, operators, join_operator)
 
-    def __getitem__(self, item):
-        data = self._read()
-        return _FilterDataNode._filter_by_key(data, item)
+    def get_label(self) -> str:
+        """Returns the data node simple label prefixed by its owner label.
+
+        Returns:
+            The label of the data node as a string.
+        """
+        return self._get_label()
+
+    def get_simple_label(self) -> str:
+        """Returns the data node simple label.
+
+        Returns:
+            The simple label of the data node as a string.
+        """
+        return self._get_simple_label()
+
+    def get_parents(self) -> Dict[str, Set[_Entity]]:
+        """Get all parents of this data node.
+
+        Returns:
+            The dictionary of all parent entities.
+                They are grouped by their type (Scenario^, Sequence^, or Task^) so each key corresponds
+                to a level of the parents and the value is a set of the parent entities.
+                An empty dictionary is returned if the entity does not have parents.
+        """
+        from ... import core as tp
+
+        return tp.get_parents(self)
+
+    def get_last_edit(self) -> Optional[Edit]:
+        """Get last `Edit` of this data node.
+
+        Returns:
+            None if there has been no `Edit` on this data node.
+        """
+        return self._edits[-1] if self._edits else None
 
     @abstractmethod
     def _read(self):
@@ -525,66 +578,33 @@ class DataNode(_Entity, _Labeled):
     def _write(self, data):
         raise NotImplementedError
 
-    @property  # type: ignore
-    @_self_reload(_MANAGER_NAME)
-    def is_ready_for_reading(self) -> bool:
-        """Indicate if this data node is ready for reading.
-
-        Returns:
-            False if the data is locked for modification or if the data has never been written.
-                True otherwise.
-        """
-        if self._edit_in_progress:
-            return False
-        if not self._last_edit_date:
-            # Never been written so it is not up-to-date
-            return False
-        return True
+    @staticmethod
+    def _new_id(config_id: str) -> DataNodeId:
+        """Generate a unique datanode identifier."""
+        return DataNodeId(
+            DataNode.__ID_SEPARATOR.join([DataNode._ID_PREFIX, _validate_id(config_id), str(uuid.uuid4())])
+        )
 
-    @property  # type: ignore
-    @_self_reload(_MANAGER_NAME)
-    def is_valid(self) -> bool:
-        """Indicate if this data node is valid.
+    def _get_user_properties(self) -> Dict[str, Any]:
+        """Get user properties."""
+        return {key: value for key, value in self.properties.items() if key not in self._TAIPY_PROPERTIES}
 
-        Returns:
-            False if the data ever been written or the expiration date has passed.<br/>
-            True otherwise.
-        """
-        if not self._last_edit_date:
-            # Never been written so it is not valid
-            return False
-        if not self._validity_period:
-            # No validity period and has already been written, so it is valid
-            return True
-        if datetime.now() > self.expiration_date:
-            # expiration_date has been passed
-            return False
-        return True
+    @classmethod
+    def _get_last_modified_datetime(cls, path: Optional[str] = None) -> Optional[datetime]:
+        if path and os.path.isfile(path):
+            return datetime.fromtimestamp(os.path.getmtime(path))
 
-    @property
-    def is_up_to_date(self) -> bool:
-        """Indicate if this data node is up-to-date.
+        last_modified_datetime = None
+        if path and os.path.isdir(path):
+            for filename in os.listdir(path):
+                filepath = os.path.join(path, filename)
+                if os.path.isfile(filepath):
+                    file_mtime = datetime.fromtimestamp(os.path.getmtime(filepath))
 
-        Returns:
-            False if a preceding data node has been updated before the selected data node
-            or the selected data is invalid.<br/>
-            True otherwise.
-        """
-        if self.is_valid:
-            from ..scenario.scenario import Scenario
-            from ..taipy import get_parents
+                    if last_modified_datetime is None or file_mtime > last_modified_datetime:
+                        last_modified_datetime = file_mtime
 
-            parent_scenarios: Set[Scenario] = get_parents(self)["scenario"]  # type: ignore
-            for parent_scenario in parent_scenarios:
-                for ancestor_node in nx.ancestors(parent_scenario._build_dag(), self):
-                    if (
-                        isinstance(ancestor_node, DataNode)
-                        and ancestor_node.last_edit_date
-                        and ancestor_node.last_edit_date > self.last_edit_date
-                    ):
-                        return False
-            return True
-        return False
+        return last_modified_datetime
 
     @staticmethod
     def _class_map():
@@ -604,22 +624,6 @@ class DataNode(_Entity, _Labeled):
 
         return class_map
 
-    def get_label(self) -> str:
-        """Returns the data node simple label prefixed by its owner label.
-
-        Returns:
-            The label of the data node as a string.
-        """
-        return self._get_label()
-
-    def get_simple_label(self) -> str:
-        """Returns the data node simple label.
-
-        Returns:
-            The simple label of the data node as a string.
-        """
-        return self._get_simple_label()
-
 
 @_make_event.register(DataNode)
 def _make_event_for_datanode(
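To illustrate the `filter()` method documented above, here is a usage sketch; the `dn` variable is assumed to be an existing `DataNode` holding tabular data and is not part of this diff:

```python
from taipy.core.data.operator import JoinOperator, Operator

# Sketch: keep rows where score > 10 or flag == True, joined with OR.
rows = dn.filter(
    [("score", 10, Operator.GREATER_THAN), ("flag", True, Operator.EQUAL)],
    JoinOperator.OR,
)
```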

+ 2 - 0
taipy/core/data/data_node_id.py

@@ -12,6 +12,8 @@
 from typing import Any, Dict, NewType
 
 DataNodeId = NewType("DataNodeId", str)
+"""Type that holds a `DataNode^` identifier."""
 DataNodeId.__doc__ = """Type that holds a `DataNode^` identifier."""
 Edit = NewType("Edit", Dict[str, Any])
+"""Type that holds a `DataNode^` edit information."""
 Edit.__doc__ = """Type that holds a `DataNode^` edit information."""

+ 30 - 49
taipy/core/data/excel.py

@@ -33,36 +33,16 @@ class ExcelDataNode(DataNode, _FileDataNodeMixin, _TabularDataNodeMixin):
 
     The Excel file format is _xlsx_.
 
-    Attributes:
-        config_id (str): Identifier of this data node configuration. It must be a valid Python
-            identifier.
-        scope (Scope^): The scope of this data node.
-        id (str): The unique identifier of this data node.
-        owner_id (str): The identifier of the owner (sequence_id, scenario_id, cycle_id) or
-            `None`.
-        parent_ids (Optional[Set[str]]): The identifiers of the parent tasks or `None`.
-        last_edit_date (datetime): The date and time of the last modification.
-        edits (List[Edit^]): The ordered list of edits for that job.
-        version (str): The string indicates the application version of the data node to instantiate. If not provided,
-            the current version is used.
-        validity_period (Optional[timedelta]): The duration implemented as a timedelta since the last edit date for
-            which the data node can be considered up-to-date. Once the validity period has passed, the data node is
-            considered stale and relevant tasks will run even if they are skippable (see the
-            [Task management](../../userman/scenario_features/sdm/task/index.md) page for more details).
-            If _validity_period_ is set to `None`, the data node is always up-to-date.
-        edit_in_progress (bool): True if a task computing the data node has been submitted
-            and not completed yet. False otherwise.
-        editor_id (Optional[str]): The identifier of the user who is currently editing the data node.
-        editor_expiration_date (Optional[datetime]): The expiration date of the editor lock.
-        path (str): The path to the Excel file.
-        properties (dict[str, Any]): A dictionary of additional properties. The _properties_
-            must have a _"default_path"_ or _"path"_ entry with the path of the Excel file:
-
-            - _"default_path"_ `(str)`: The path of the Excel file.\n
-            - _"has_header"_ `(bool)`: If True, indicates that the Excel file has a header.\n
-            - _"sheet_name"_ `(Union[List[str], str])`: The list of sheet names to be used. This
-                can be a unique name.\n
-            - _"exposed_type"_: The exposed type of the data read from Excel file. The default value is `pandas`.\n
+    The *properties* attribute can contain the following optional entries:
+
+    - *sheet_name* (`Union[str, List[str]]`): The name of the sheet(s) to be used.
+    - *default_path* (`str`): The default path of the Excel file used at the instantiation of the
+        data node.
+    - *default_data*: The default data of the data node. It is used at the data node instantiation
+        to write the data to the Excel file.
+    - *has_header* (`bool`): If True, indicates that the Excel file has a header.
+    - *exposed_type* (`str`): The exposed type of the data read from the Excel file. The default value
+        is `pandas`.
     """
 
     __STORAGE_TYPE = "excel"
@@ -136,8 +116,26 @@ class ExcelDataNode(DataNode, _FileDataNodeMixin, _TabularDataNodeMixin):
 
     @classmethod
     def storage_type(cls) -> str:
+        """Return the storage type of the data node: "excel"."""
         return cls.__STORAGE_TYPE
 
+    def write_with_column_names(
+        self, data: Any, columns: Optional[List[str]] = None, job_id: Optional[JobId] = None
+    ) -> None:
+        """Write a set of columns.
+
+        Parameters:
+            data (Any): The data to write.
+            columns (Optional[List[str]]): The list of column names to write.
+            job_id (Optional[JobId]): An optional identifier of the writer.
+        """
+        if isinstance(data, Dict) and all(isinstance(x, (pd.DataFrame, np.ndarray)) for x in data.values()):
+            self._write_excel_with_multiple_sheets(data, columns=columns)
+        else:
+            df = pd.DataFrame(data)
+            if columns:
+                df = self._set_column_if_dataframe(df, columns)
+            self._write_excel_with_single_sheet(df.to_excel, self.path, index=False)
+        self.track_edit(timestamp=datetime.now(), job_id=job_id)
+
     @staticmethod
     def _check_exposed_type(exposed_type):
         if isinstance(exposed_type, str):
@@ -169,7 +167,7 @@ class ExcelDataNode(DataNode, _FileDataNodeMixin, _TabularDataNodeMixin):
         if sheet_exposed_type == self._EXPOSED_TYPE_NUMPY:
             return self._read_as_numpy(path, sheet_name)
         elif sheet_exposed_type == self._EXPOSED_TYPE_PANDAS:
-            return self._read_as_pandas_dataframe(path, sheet_name)
+            return self._read_as_pandas_dataframe(path, sheet_name)  # type: ignore
         return None
 
     def _read_as(self, path: str):
@@ -220,7 +218,7 @@ class ExcelDataNode(DataNode, _FileDataNodeMixin, _TabularDataNodeMixin):
                 else:
                     for i, row in enumerate(res):
                         res[i] = sheet_exposed_type(*row)
-                work_books[sheet_name] = res
+                work_books[sheet_name] = res  # type: ignore
         finally:
             excel_file.close()
 
@@ -338,20 +336,3 @@ class ExcelDataNode(DataNode, _FileDataNodeMixin, _TabularDataNodeMixin):
             self._write_excel_with_single_sheet(
                 data.to_excel, self._path, index=False, header=properties[self._HAS_HEADER_PROPERTY] or None
             )
-
-    def write_with_column_names(self, data: Any, columns: List[str] = None, job_id: Optional[JobId] = None):
-        """Write a set of columns.
-
-        Parameters:
-            data (Any): The data to write.
-            columns (List[str]): The list of column names to write.
-            job_id (JobId^): An optional identifier of the writer.
-        """
-        if isinstance(data, Dict) and all(isinstance(x, (pd.DataFrame, np.ndarray)) for x in data.values()):
-            self._write_excel_with_multiple_sheets(data, columns=columns)
-        else:
-            df = pd.DataFrame(data)
-            if columns:
-                df = self._set_column_if_dataframe(df, columns)
-            self._write_excel_with_single_sheet(df.to_excel, self.path, index=False)
-        self.track_edit(timestamp=datetime.now(), job_id=job_id)
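A usage sketch for the method relocated above; the `excel_dn` variable is assumed to be an existing `ExcelDataNode` instance and is not part of this diff:

```python
import pandas as pd

# Sketch: write a selection of columns to the Excel file through an
# existing ExcelDataNode instance.
data = pd.DataFrame({"item": ["a", "b"], "qty": [3, 5], "internal": [0, 1]})
excel_dn.write_with_column_names(data, columns=["item", "qty"])
```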

+ 12 - 28
taipy/core/data/generic.py

@@ -23,34 +23,17 @@ from .data_node_id import DataNodeId, Edit
 class GenericDataNode(DataNode):
     """Generic Data Node that uses custom read and write functions.
 
-    The read and write function for this data node type can be implemented is Python.
-
-    Attributes:
-        config_id (str): Identifier of the data node configuration. It must be a valid Python
-            identifier.
-        scope (Scope^): The scope of this data node.
-        id (str): The unique identifier of the data node.
-        owner_id (str): The identifier of the owner (sequence_id, scenario_id, cycle_id) or
-            `None`.
-        parent_ids (Optional[Set[str]]): The identifiers of the parent tasks or `None`.
-        last_edit_date (datetime): The date and time of the last modification.
-        edits (List[Edit^]): The ordered list of edits for that job.
-        version (str): The string indicates the application version of the data node to instantiate. If not provided,
-            the current version is used.
-        validity_period (Optional[timedelta]): The duration implemented as a timedelta since the last edit date for
-            which the data node can be considered up-to-date. Once the validity period has passed, the data node is
-            considered stale and relevant tasks will run even if they are skippable (see the
-            [Task management](../../userman/scenario_features/sdm/task/index.md) page for more details).
-            If _validity_period_ is set to `None`, the data node is always up-to-date.
-        edit_in_progress (bool): True if a task computing the data node has been submitted
-            and not completed yet. False otherwise.
-        editor_id (Optional[str]): The identifier of the user who is currently editing the data node.
-        editor_expiration_date (Optional[datetime]): The expiration date of the editor lock.
-        properties (dict[str, Any]): A dictionary of additional properties. Note that the
-            _properties_ parameter must at least contain an entry for either _"read_fct"_ or
-            _"write_fct"_ representing the read and write functions.
-            Entries for _"read_fct_args"_ and _"write_fct_args"_ respectively represent
-            potential parameters for the _"read_fct"_ and _"write_fct"_ functions.
+    The read and write functions for this data node type are Python functions.
+
+    The *properties* attribute must contain at least one of the two following entries:
+
+    - *read_fct* (`Callable`): The read function for the data node.
+    - *write_fct* (`Callable`): The write function for the data node.
+
+    The *properties* attribute can also contain the following optional entries:
+
+    - *read_fct_args* (`List[Any]`): The arguments to be passed to the read function.
+    - *write_fct_args* (`List[Any]`): The arguments to be passed to the write function.
     """
 
     __STORAGE_TYPE = "generic"
@@ -122,6 +105,7 @@ class GenericDataNode(DataNode):
 
     @classmethod
     def storage_type(cls) -> str:
+        """Returns the storage type of the data node: "generic"."""
         return cls.__STORAGE_TYPE
 
     def _read(self):
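A configuration sketch for a generic data node, assuming Taipy's `Config.configure_generic_data_node()` API; the read and write functions below are placeholders:

```python
from taipy import Config

def read_data():
    # Placeholder read function: returns the data the node exposes.
    return [1, 2, 3]

def write_data(data, path):
    # Placeholder write function: receives the data plus the extra
    # argument declared in write_fct_args.
    with open(path, "w") as f:
        f.write(str(data))

generic_cfg = Config.configure_generic_data_node(
    id="my_generic",
    read_fct=read_data,
    write_fct=write_data,
    write_fct_args=["output.txt"],
)
```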

+ 7 - 27
taipy/core/data/in_memory.py

@@ -26,34 +26,13 @@ class InMemoryDataNode(DataNode):
 
     Warning:
         This Data Node implementation is not compatible with a parallel execution of taipy tasks,
-        but only with a task executor in development mode. The purpose of `InMemoryDataNode` is to be used
-        for development or debugging.
+        but only with a task executor in development mode. The purpose of `InMemoryDataNode` is
+        mostly to be used for development, prototyping, or debugging.
 
-    Attributes:
-        config_id (str): Identifier of the data node configuration. It must be a valid Python
-            identifier.
-        scope (Scope^): The scope of this data node.
-        id (str): The unique identifier of this data node.
-        owner_id (str): The identifier of the owner (sequence_id, scenario_id, cycle_id) or
-            `None`.
-        parent_ids (Optional[Set[str]]): The identifiers of the parent tasks or `None`.
-        last_edit_date (datetime): The date and time of the last modification.
-        edits (List[Edit^]): The ordered list of edits for that job.
-        version (str): The string indicates the application version of the data node to instantiate. If not provided,
-            the current version is used.
-        validity_period (Optional[timedelta]): The duration implemented as a timedelta since the last edit date for
-            which the data node can be considered up-to-date. Once the validity period has passed, the data node is
-            considered stale and relevant tasks will run even if they are skippable (see the
-            [Task management](../../userman/scenario_features/sdm/task/index.md) page for more details).
-            If _validity_period_ is set to `None`, the data node is always up-to-date.
-        edit_in_progress (bool): True if a task computing the data node has been submitted
-            and not completed yet. False otherwise.
-        editor_id (Optional[str]): The identifier of the user who is currently editing the data node.
-        editor_expiration_date (Optional[datetime]): The expiration date of the editor lock.
-        properties (dict[str, Any]): A dictionary of additional properties. When creating an
-            _In Memory_ data node, if the _properties_ dictionary contains a _"default_data"_
-            entry, the data node is automatically written with the corresponding _"default_data"_
-            value.
+    The *properties* attribute can contain the following optional entries:
+
+    - *default_data* (`Any`): The default data of the data node. It is used at the data node
+        instantiation to write the data.
     """
 
     __STORAGE_TYPE = "in_memory"
@@ -111,6 +90,7 @@ class InMemoryDataNode(DataNode):
 
     @classmethod
     def storage_type(cls) -> str:
+        """Return the storage type of the data node: "in_memory"."""
         return cls.__STORAGE_TYPE
 
     def _read(self):
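A configuration sketch, assuming Taipy's `Config.configure_in_memory_data_node()` API:

```python
from taipy import Config

# Sketch: an in-memory data node pre-populated with default_data at
# instantiation (development/debugging use only, as noted above).
in_memory_cfg = Config.configure_in_memory_data_node(
    id="working_set",
    default_data={"epochs": 10},
)
```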

+ 14 - 33
taipy/core/data/json.py

@@ -28,35 +28,13 @@ from .data_node_id import DataNodeId, Edit
 class JSONDataNode(DataNode, _FileDataNodeMixin):
     """Data Node stored as a JSON file.
 
-    Attributes:
-        config_id (str): Identifier of the data node configuration. This string must be a valid
-            Python identifier.
-        scope (Scope^): The scope of this data node.
-        id (str): The unique identifier of this data node.
-        owner_id (str): The identifier of the owner (sequence_id, scenario_id, cycle_id) or `None`.
-        parent_ids (Optional[Set[str]]): The identifiers of the parent tasks or `None`.
-        last_edit_date (datetime): The date and time of the last modification.
-        edits (List[Edit^]): The ordered list of edits for that job.
-        version (str): The string indicates the application version of the data node to instantiate. If not provided,
-            the current version is used.
-        validity_period (Optional[timedelta]): The duration implemented as a timedelta since the last edit date for
-            which the data node can be considered up-to-date. Once the validity period has passed, the data node is
-            considered stale and relevant tasks will run even if they are skippable (see the
-            [Task management](../../userman/scenario_features/sdm/task/index.md) page for more details).
-            If _validity_period_ is set to `None`, the data node is always up-to-date.
-        edit_in_progress (bool): True if a task computing the data node has been submitted
-            and not completed yet. False otherwise.
-        editor_id (Optional[str]): The identifier of the user who is currently editing the data node.
-        editor_expiration_date (Optional[datetime]): The expiration date of the editor lock.
-        path (str): The path to the JSON file.
-        encoder (json.JSONEncoder): The JSON encoder that is used to write into the JSON file.
-        decoder (json.JSONDecoder): The JSON decoder that is used to read from the JSON file.
-        properties (dict[str, Any]): A dictionary of additional properties. The _properties_
-            must have a _"default_path"_ or _"path"_ entry with the path of the JSON file:
-
-            - _"default_path"_ `(str)`: The default path of the CSV file.\n
-            - _"encoding"_ `(str)`: The encoding of the CSV file. The default value is `utf-8`.\n
-            - _"default_data"_: The default data of the data nodes instantiated from this json data node.\n
+    The *properties* attribute can contain the following optional entries:
+
+    - *default_path* (`str`): The default path of the JSON file used at the instantiation of
+        the data node.
+    - *default_data* (`Any`): The default data of the data node. It is used at the data node
+        instantiation to write the data to the JSON file.
+    - *encoding* (`str`): The encoding of the JSON file. The default value is `utf-8`.
     """
 
     __STORAGE_TYPE = "json"
@@ -129,24 +107,27 @@ class JSONDataNode(DataNode, _FileDataNodeMixin):
 
     @classmethod
     def storage_type(cls) -> str:
+        """Return the storage type of the data node: "json"."""
         return cls.__STORAGE_TYPE
 
     @property  # type: ignore
     @_self_reload(DataNode._MANAGER_NAME)
-    def encoder(self):
+    def encoder(self) -> json.JSONEncoder:
+        """The JSON encoder that is used to write into the JSON file."""
         return self._encoder
 
     @encoder.setter
-    def encoder(self, encoder: json.JSONEncoder):
+    def encoder(self, encoder: json.JSONEncoder) -> None:
         self.properties[self._ENCODER_KEY] = encoder
 
     @property  # type: ignore
     @_self_reload(DataNode._MANAGER_NAME)
-    def decoder(self):
+    def decoder(self) -> json.JSONDecoder:
+        """The JSON decoder that is used to read from the JSON file."""
         return self._decoder
 
     @decoder.setter
-    def decoder(self, decoder: json.JSONDecoder):
+    def decoder(self, decoder: json.JSONDecoder) -> None:
         self.properties[self._DECODER_KEY] = decoder
 
     def _read(self):
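A configuration sketch, assuming Taipy's `Config.configure_json_data_node()` API:

```python
from taipy import Config

# Sketch: a JSON data node whose file is initialized from default_data
# at instantiation.
json_cfg = Config.configure_json_data_node(
    id="settings",
    default_path="settings.json",
    default_data={"theme": "dark", "locale": "en"},
)
```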

+ 26 - 42
taipy/core/data/mongo.py

@@ -31,41 +31,22 @@ from .data_node_id import DataNodeId, Edit
 class MongoCollectionDataNode(DataNode):
     """Data Node stored in a Mongo collection.
 
-    Attributes:
-        config_id (str): Identifier of the data node configuration. It must be a valid Python
-            identifier.
-        scope (Scope^): The scope of this data node.
-        id (str): The unique identifier of this data node.
-        owner_id (str): The identifier of the owner (sequence_id, scenario_id, cycle_id) or
-            None.
-        parent_ids (Optional[Set[str]]): The identifiers of the parent tasks or `None`.
-        last_edit_date (datetime): The date and time of the last modification.
-        edits (List[Edit^]): The ordered list of edits for that job.
-        version (str): The string indicates the application version of the data node to instantiate. If not provided,
-            the current version is used.
-        validity_period (Optional[timedelta]): The duration implemented as a timedelta since the last edit date for
-            which the data node can be considered up-to-date. Once the validity period has passed, the data node is
-            considered stale and relevant tasks will run even if they are skippable (see the
-            [Task management](../../userman/scenario_features/sdm/task/index.md) page for more details).
-            If _validity_period_ is set to `None`, the data node is always up-to-date.
-        edit_in_progress (bool): True if a task computing the data node has been submitted
-            and not completed yet. False otherwise.
-        editor_id (Optional[str]): The identifier of the user who is currently editing the data node.
-        editor_expiration_date (Optional[datetime]): The expiration date of the editor lock.
-        properties (dict[str, Any]): A dictionary of additional properties. Note that the
-            _properties_ parameter must at least contain an entry for _"db_name"_ and _"collection_name"_:
-
-            - _"db_name"_ `(str)`: The database name.\n
-            - _"collection_name"_ `(str)`: The collection in the database to read from and to write the data to.\n
-            - _"custom_document"_ `(Any)`: The custom document class to store, encode, and decode data when reading and
-                writing to a Mongo collection.\n
-            - _"db_username"_ `(str)`: The database username.\n
-            - _"db_password"_ `(str)`: The database password.\n
-            - _"db_host"_ `(str)`: The database host. The default value is _"localhost"_.\n
-            - _"db_port"_ `(int)`: The database port. The default value is 27017.\n
-            - _"db_driver"_ `(str)`: The database driver.\n
-            - _"db_extra_args"_ `(Dict[str, Any])`: A dictionary of additional arguments to be passed into database
-                connection string.\n
+    The *properties* attribute must contain the following mandatory entries:
+
+    - *db_name* (`str`): The database name.
+    - *collection_name* (`str`): The collection in the database to read from and to write the data to.
+
+    The *properties* attribute can also contain the following optional entries:
+
+    - *custom_document* (`Any`): The custom document class to store, encode, and decode data when reading
+        and writing to a Mongo collection.
+    - *db_username* (`str`): The database username.
+    - *db_password* (`str`): The database password.
+    - *db_host* (`str`): The database host. The default value is *"localhost"*.
+    - *db_port* (`int`): The database port. The default value is 27017.
+    - *db_driver* (`str`): The database driver.
+    - *db_extra_args* (`Dict[str, Any]`): A dictionary of additional arguments to be passed into
+        database connection string.
     """
 
     __STORAGE_TYPE = "mongo_collection"
@@ -172,17 +153,12 @@ class MongoCollectionDataNode(DataNode):
             }
         )
 
-    def _check_custom_document(self, custom_document):
-        if not isclass(custom_document):
-            raise InvalidCustomDocument(
-                f"Invalid custom document of {custom_document}. Only custom class are supported."
-            )
-
     @classmethod
     def storage_type(cls) -> str:
+        """Return the storage type of the data node: "mongo_collection"."""
         return cls.__STORAGE_TYPE
 
-    def filter(self, operators: Optional[Union[List, Tuple]] = None, join_operator=JoinOperator.AND):
+    def filter(self, operators: Optional[Union[List, Tuple]] = None, join_operator=JoinOperator.AND) -> List:
         cursor = self._read_by_query(operators, join_operator)
         return [self._decoder(row) for row in cursor]
 
@@ -267,11 +243,18 @@ class MongoCollectionDataNode(DataNode):
 
         self.collection.insert_many(data)
 
+    def _check_custom_document(self, custom_document):
+        if not isclass(custom_document):
+            raise InvalidCustomDocument(
+                f"Invalid custom document of {custom_document}. Only custom class are supported."
+            )
+
     def _default_decoder(self, document: Dict) -> Any:
         """Decode a Mongo dictionary to a custom document object for reading.
 
         Parameters:
             document (Dict): The document dictionary returned by the Mongo query.
+
         Returns:
             A custom document object.
         """
@@ -287,3 +270,4 @@ class MongoCollectionDataNode(DataNode):
             The document dictionary.
         """
         return document_object.__dict__
+
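A configuration sketch, assuming Taipy's `Config.configure_mongo_collection_data_node()` API; all values are placeholders:

```python
from taipy import Config

# Sketch: a Mongo collection data node with the two mandatory entries
# plus optional credentials.
mongo_cfg = Config.configure_mongo_collection_data_node(
    id="orders",
    db_name="shop",
    collection_name="orders",
    db_username="<USERNAME>",
    db_password="<PASSWORD>",
)
```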

+ 1 - 3
taipy/core/data/operator.py

@@ -34,9 +34,7 @@ class Operator(Enum):
 
 
 class JoinOperator(Enum):
-    """
-    Enumeration of join operators for Data Node filtering. The possible values are `AND` and `OR`.
-    """
+    """Enumeration of join operators for Data Node filtering. The possible values are `AND` and `OR`."""
 
     AND = 1
     OR = 2

+ 64 - 80
taipy/core/data/parquet.py

@@ -31,46 +31,28 @@ from .data_node_id import DataNodeId, Edit
 class ParquetDataNode(DataNode, _FileDataNodeMixin, _TabularDataNodeMixin):
     """Data Node stored as a Parquet file.
 
-    Attributes:
-        config_id (str): Identifier of the data node configuration. This string must be a valid
-            Python identifier.
-        scope (Scope^): The scope of this data node.
-        id (str): The unique identifier of this data node.
-        owner_id (str): The identifier of the owner (sequence_id, scenario_id, cycle_id) or `None`.
-        parent_ids (Optional[Set[str]]): The identifiers of the parent tasks or `None`.
-        last_edit_date (datetime): The date and time of the last modification.
-        edits (List[Edit^]): The ordered list of edits for that job.
-        version (str): The string indicates the application version of the data node to instantiate. If not provided,
-            the current version is used.
-        validity_period (Optional[timedelta]): The duration implemented as a timedelta since the last edit date for
-            which the data node can be considered up-to-date. Once the validity period has passed, the data node is
-            considered stale and relevant tasks will run even if they are skippable (see the
-            [Task management](../../userman/scenario_features/sdm/task/index.md) page for more details).
-            If _validity_period_ is set to `None`, the data node is always up-to-date.
-        edit_in_progress (bool): True if a task computing the data node has been submitted
-            and not completed yet. False otherwise.
-        editor_id (Optional[str]): The identifier of the user who is currently editing the data node.
-        editor_expiration_date (Optional[datetime]): The expiration date of the editor lock.
-        path (str): The path to the Parquet file.
-        properties (dict[str, Any]): A dictionary of additional properties. *properties*
-            must have a *"default_path"* or *"path"* entry with the path of the Parquet file:
-
-            - *"default_path"* (`str`): The default path of the Parquet file.
-            - *"exposed_type"*: The exposed type of the data read from Parquet file. The default
-                value is `pandas`.
-            - *"engine"* (`Optional[str]`): Parquet library to use. Possible values are
-                *"fastparquet"* or *"pyarrow"*.<br/>
-                The default value is *"pyarrow"*.
-            - *"compression"* (`Optional[str]`): Name of the compression to use. Possible values
-                are *"snappy"*, *"gzip"*, *"brotli"*, or *"none"* (no compression).<br/>
-                The default value is *"snappy"*.
-            - *"read_kwargs"* (`Optional[dict]`): Additional parameters passed to the
-                *pandas.read_parquet()* function.
-            - *"write_kwargs"* (`Optional[dict]`): Additional parameters passed to the
-                *pandas.DataFrame.write_parquet()* function.
-                The parameters in *"read_kwargs"* and *"write_kwargs"* have a
-                **higher precedence** than the top-level parameters which are also passed to
-                Pandas.
+    The *properties* attribute can contain the following optional entries:
+
+    - *default_path* (`str`): The default path of the Parquet file used at the instantiation of
+        the data node.
+    - *default_data* (`Any`): The default data of the data node. It is used at the data node
+        instantiation to write the data to the Parquet file.
+    - *has_header* (`bool`): If True, indicates that the Parquet file has a header.
+    - *exposed_type* (`str`): The exposed type of the data read from Parquet
+        file.<br/> The default value is `pandas`.
+    - *engine* (`Optional[str]`): Parquet library to use. Possible values are
+        *"fastparquet"* or *"pyarrow"*.<br/> The default value is *"pyarrow"*.
+    - *compression* (`Optional[str]`): Name of the compression to use. Possible values
+        are *"snappy"*, *"gzip"*, *"brotli"*, or *"none"* (no compression).<br/>
+        The default value is *"snappy"*.
+    - *read_kwargs* (`Optional[dict]`): Additional parameters passed to the
+        *pandas.read_parquet()* function when reading the data.<br/>
+        The parameters in *"read_kwargs"* have a **higher precedence** than the top-level
+        parameters which are also passed to Pandas.
+    - *write_kwargs* (`Optional[dict]`): Additional parameters passed to the
+        *pandas.DataFrame.to_parquet()* function when writing the data.<br/>
+        The parameters in *"write_kwargs"* have a **higher precedence** than the
+        top-level parameters which are also passed to Pandas.
     """
 
     __STORAGE_TYPE = "parquet"
@@ -178,8 +160,48 @@ class ParquetDataNode(DataNode, _FileDataNodeMixin, _TabularDataNodeMixin):
 
     @classmethod
     def storage_type(cls) -> str:
+        """Return the storage type of the data node: "parquet"."""
         return cls.__STORAGE_TYPE
 
+    def _write_with_kwargs(self, data: Any, job_id: Optional[JobId] = None, **write_kwargs):
+        """Write the data referenced by this data node.
+
+        Keyword arguments passed here that are also present in the data node
+        configuration overwrite the configured values.
+
+        Parameters:
+            data (Any): The data to write.
+            job_id (Optional[JobId]): An optional identifier of the writer.
+            **write_kwargs (dict[str, any]): The keyword arguments passed to the function
+                `pandas.DataFrame.to_parquet()`.
+        """
+        properties = self.properties
+        kwargs = {
+            self.__ENGINE_PROPERTY: properties[self.__ENGINE_PROPERTY],
+            self.__COMPRESSION_PROPERTY: properties[self.__COMPRESSION_PROPERTY],
+        }
+        kwargs.update(properties[self.__WRITE_KWARGS_PROPERTY])
+        kwargs.update(write_kwargs)
+
+        df = self._convert_data_to_dataframe(properties[self._EXPOSED_TYPE_PROPERTY], data)
+        if isinstance(df, pd.Series):
+            df = pd.DataFrame(df)
+
+        # Ensure that the columns are strings, otherwise writing will fail with pandas 1.3.5
+        df.columns = df.columns.astype(str)
+        df.to_parquet(self._path, **kwargs)
+        self.track_edit(timestamp=datetime.now(), job_id=job_id)
+
+    def read_with_kwargs(self, **read_kwargs):
+        """Read data from this data node.
+
+        Keyword arguments passed here that are also present in the data node
+        configuration overwrite the configured values.
+
+        Parameters:
+            **read_kwargs (dict[str, any]): The keyword arguments passed to the function
+                `pandas.read_parquet()`.
+
+        Returns:
+            The data read from this data node.
+        """
+        return self._read_from_path(**read_kwargs)
+
     def _read(self):
         return self._read_from_path()
 
@@ -224,46 +246,8 @@ class ParquetDataNode(DataNode, _FileDataNodeMixin, _TabularDataNodeMixin):
         return pd.read_parquet(path, **read_kwargs)
 
     def _append(self, data: Any):
-        self.write_with_kwargs(data, engine="fastparquet", append=True)
+        self._write_with_kwargs(data, engine="fastparquet", append=True)
 
     def _write(self, data: Any):
-        self.write_with_kwargs(data)
+        self._write_with_kwargs(data)
 
-    def write_with_kwargs(self, data: Any, job_id: Optional[JobId] = None, **write_kwargs):
-        """Write the data referenced by this data node.
-
-        Keyword arguments here which are also present in the Data Node config will overwrite them.
-
-        Parameters:
-            data (Any): The data to write.
-            job_id (JobId^): An optional identifier of the writer.
-            **write_kwargs (dict[str, any]): The keyword arguments passed to the function
-                `pandas.DataFrame.to_parquet()`.
-        """
-        properties = self.properties
-        kwargs = {
-            self.__ENGINE_PROPERTY: properties[self.__ENGINE_PROPERTY],
-            self.__COMPRESSION_PROPERTY: properties[self.__COMPRESSION_PROPERTY],
-        }
-        kwargs.update(properties[self.__WRITE_KWARGS_PROPERTY])
-        kwargs.update(write_kwargs)
-
-        df = self._convert_data_to_dataframe(properties[self._EXPOSED_TYPE_PROPERTY], data)
-        if isinstance(df, pd.Series):
-            df = pd.DataFrame(df)
-
-        # Ensure that the columns are strings, otherwise writing will fail with pandas 1.3.5
-        df.columns = df.columns.astype(str)
-        df.to_parquet(self._path, **kwargs)
-        self.track_edit(timestamp=datetime.now(), job_id=job_id)
-
-    def read_with_kwargs(self, **read_kwargs):
-        """Read data from this data node.
-
-        Keyword arguments here which are also present in the Data Node config will overwrite them.
-
-        Parameters:
-            **read_kwargs (dict[str, any]): The keyword arguments passed to the function
-                `pandas.read_parquet()`.
-        """
-        return self._read_from_path(**read_kwargs)
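For reference, a minimal sketch of how `read_with_kwargs` can be used to override configured options at call time. It assumes the standard `Config.configure_parquet_data_node()` helper and a GLOBAL-scoped data node; the config id, file name, and data are illustrative:

```python
import pandas as pd
import taipy as tp
from taipy import Config, Scope

# Illustrative config: "sales" and the file name are made up for this sketch.
sales_cfg = Config.configure_parquet_data_node(
    "sales", default_path="sales.parquet", compression="snappy", scope=Scope.GLOBAL
)

sales = tp.create_global_data_node(sales_cfg)
sales.write(pd.DataFrame({"region": ["EU", "US"], "qty": [3, 5]}))

# Call-site kwargs take precedence over the configured read options:
subset = sales.read_with_kwargs(columns=["qty"])
```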

+ 7 - 27
taipy/core/data/pickle.py

@@ -25,33 +25,12 @@ from .data_node_id import DataNodeId, Edit
 class PickleDataNode(DataNode, _FileDataNodeMixin):
     """Data Node stored as a pickle file.
 
-    Attributes:
-        config_id (str): Identifier of the data node configuration. It must be a valid Python
-            identifier.
-        scope (Scope^): The scope of this data node.
-        id (str): The unique identifier of this data node.
-        owner_id (str): The identifier of the owner (sequence_id, scenario_id, cycle_id) or
-            `None`.
-        parent_ids (Optional[Set[str]]): The identifiers of the parent tasks or `None`.
-        last_edit_date (datetime): The date and time of the last modification.
-        edits (List[Edit^]): The ordered list of edits for that job.
-        version (str): The string indicates the application version of the data node to instantiate. If not provided,
-            the current version is used.
-        validity_period (Optional[timedelta]): The duration implemented as a timedelta since the last edit date for
-            which the data node can be considered up-to-date. Once the validity period has passed, the data node is
-            considered stale and relevant tasks will run even if they are skippable (see the
-            [Task management](../../userman/scenario_features/sdm/task/index.md) page for more details).
-            If _validity_period_ is set to `None`, the data node is always up-to-date.
-        edit_in_progress (bool): True if a task computing the data node has been submitted
-            and not completed yet. False otherwise.
-        editor_id (Optional[str]): The identifier of the user who is currently editing the data node.
-        editor_expiration_date (Optional[datetime]): The expiration date of the editor lock.
-        properties (dict[str, Any]): A dictionary of additional properties.
-            When creating a pickle data node, if the _properties_ dictionary contains a
-            _"default_data"_ entry, the data node is automatically written with the corresponding
-            _"default_data"_ value.
-            If the _properties_ dictionary contains a _"default_path"_ or _"path"_ entry, the data will be stored
-            using the corresponding value as the name of the pickle file.
+    The *properties* attribute can contain the following optional entries:
+
+    - *default_path* (`str`): The default path of the Pickle file used at the instantiation of the
+        data node.
+    - *default_data*: The default data of the data node. It is written to the Pickle file
+        when the data node is instantiated.
     """
 
     __STORAGE_TYPE = "pickle"
@@ -113,6 +92,7 @@ class PickleDataNode(DataNode, _FileDataNodeMixin):
 
     @classmethod
     def storage_type(cls) -> str:
+        """Return the storage type of the data node: "pickle"."""
         return cls.__STORAGE_TYPE
 
     def _read(self):
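The *default_data* behavior above can be illustrated with a short sketch, assuming the standard `Config.configure_pickle_data_node()` helper; the config id and data are made up:

```python
import taipy as tp
from taipy import Config, Scope

# Illustrative config: the default data is written to the Pickle file at creation time.
model_cfg = Config.configure_pickle_data_node(
    "model", default_data={"weights": [0.1, 0.2]}, scope=Scope.GLOBAL
)

model = tp.create_global_data_node(model_cfg)
print(model.read())  # {'weights': [0.1, 0.2]}
```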

+ 28 - 44
taipy/core/data/sql.py

@@ -25,50 +25,33 @@ from .data_node_id import DataNodeId, Edit
 class SQLDataNode(_AbstractSQLDataNode):
     """Data Node stored in a SQL database.
 
-    Attributes:
-        config_id (str): Identifier of the data node configuration. It must be a valid Python
-            identifier.
-        scope (Scope^): The scope of this data node.
-        id (str): The unique identifier of this data node.
-        owner_id (str): The identifier of the owner (sequence_id, scenario_id, cycle_id) or
-            None.
-        parent_ids (Optional[Set[str]]): The identifiers of the parent tasks or `None`.
-        last_edit_date (datetime): The date and time of the last modification.
-        edits (List[Edit^]): The ordered list of edits for that job.
-        version (str): The string indicates the application version of the data node to instantiate. If not provided,
-            the current version is used.
-        validity_period (Optional[timedelta]): The duration implemented as a timedelta since the last edit date for
-            which the data node can be considered up-to-date. Once the validity period has passed, the data node is
-            considered stale and relevant tasks will run even if they are skippable (see the
-            [Task management](../../userman/scenario_features/sdm/task/index.md) page for more details).
-            If _validity_period_ is set to `None`, the data node is always up-to-date.
-        edit_in_progress (bool): True if a task computing the data node has been submitted
-            and not completed yet. False otherwise.
-        editor_id (Optional[str]): The identifier of the user who is currently editing the data node.
-        editor_expiration_date (Optional[datetime]): The expiration date of the editor lock.
-        properties (dict[str, Any]): A dictionary of additional properties. Note that the
-            _properties_ parameter must at least contain an entry for _"db_name"_, _"db_engine"_, _"read_query"_,
-            and _"write_query_builder"_:
-
-            - _"db_name"_ `(str)`: The database name, or the name of the SQLite database file.
-            - _"db_engine"_ `(str)`: The database engine. Possible values are _"sqlite"_, _"mssql"_, _"mysql"_, or
-                _"postgresql"_.
-            - _"read_query"_ `(str)`: The SQL query string used to read the data from the database.
-            - _"write_query_builder"_ `(Callable)`: A callback function that takes the data as an input parameter and
-                returns a list of SQL queries to be executed when writing data to the data node.
-            - _"append_query_builder"_ `(Callable)`: A callback function that takes the data as an input parameter and
-                returns a list of SQL queries to be executed when appending data to the data node.
-            - _"db_username"_ `(str)`: The database username.
-            - _"db_password"_ `(str)`: The database password.
-            - _"db_host"_ `(str)`: The database host. The default value is _"localhost"_.
-            - _"db_port"_ `(int)`: The database port. The default value is 1433.
-            - _"db_driver"_ `(str)`: The database driver.
-            - _"sqlite_folder_path"_ (str): The path to the folder that contains SQLite file. The default value
-                is the current working folder.
-            - _"sqlite_file_extension"_ (str): The filename extension of the SQLite file. The default value is ".db".
-            - _"db_extra_args"_ `(Dict[str, Any])`: A dictionary of additional arguments to be passed into database
-                connection string.
-            - _"exposed_type"_: The exposed type of the data read from SQL query. The default value is `pandas`.
+    The *properties* attribute must contain the following mandatory entries:
+
+    - *db_name* (`str`): The database name, or the name of the SQLite database file.
+    - *db_engine* (`str`): The database engine. Possible values are *sqlite*, *mssql*,
+        *mysql*, or *postgresql*.
+    - *read_query* (`str`): The SQL query string used to read the data from the database.
+    - *write_query_builder* (`Callable`): A callback function that takes the data as an input
+        parameter and returns a list of SQL queries to be executed when writing data to the data
+        node.
+
+    The *properties* attribute can also contain the following optional entries:
+
+    - *append_query_builder* (`Callable`): A callback function that takes the data as an input
+        parameter and returns a list of SQL queries to be executed when appending data to the
+        data node.
+    - *has_header* (`bool`): If True, indicates that the data read from the SQL query contains
+        a header.
+    - *exposed_type* (`str`): The exposed type of the data read from the SQL query. The default
+        value is `pandas`.
+    - *db_username* (`str`): The database username.
+    - *db_password* (`str`): The database password.
+    - *db_host* (`str`): The database host. The default value is *localhost*.
+    - *db_port* (`int`): The database port. The default value is 1433.
+    - *db_driver* (`str`): The database driver.
+    - *sqlite_folder_path* (`str`): The path to the folder that contains the SQLite file. The
+        default value is the current working folder.
+    - *sqlite_file_extension* (`str`): The filename extension of the SQLite file. The default
+        value is ".db".
+    - *db_extra_args* (`Dict[str, Any]`): A dictionary of additional arguments to be passed
+        into the database connection string.
     """
 
     __STORAGE_TYPE = "sql"
@@ -125,6 +108,7 @@ class SQLDataNode(_AbstractSQLDataNode):
 
     @classmethod
     def storage_type(cls) -> str:
+        """Return the storage type of the data node: "sql"."""
         return cls.__STORAGE_TYPE
 
     def _get_base_read_query(self) -> str:
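A minimal configuration sketch for the entries above, assuming the standard `Config.configure_sql_data_node()` helper and an SQLite database; the table, query, and builder are illustrative:

```python
from taipy import Config

# Illustrative write query builder: turns the data into plain SQL statements.
def write_query_builder(data) -> list:
    return ["DELETE FROM sales"] + [
        f"INSERT INTO sales (region, qty) VALUES ('{r.region}', {r.qty})"
        for r in data.itertuples()
    ]

sales_cfg = Config.configure_sql_data_node(
    "sales",
    db_name="sales_db",            # with the SQLite defaults: ./sales_db.db
    db_engine="sqlite",
    read_query="SELECT * FROM sales",
    write_query_builder=write_query_builder,
)
```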

+ 22 - 40
taipy/core/data/sql_table.py

@@ -25,46 +25,27 @@ from .data_node_id import DataNodeId, Edit
 
 class SQLTableDataNode(_AbstractSQLDataNode):
     """Data Node stored in a SQL table.
-
-    Attributes:
-        config_id (str): Identifier of the data node configuration. It must be a valid Python
-            identifier.
-        scope (Scope^): The scope of this data node.
-        id (str): The unique identifier of this data node.
-        owner_id (str): The identifier of the owner (sequence_id, scenario_id, cycle_id) or
-            None.
-        parent_ids (Optional[Set[str]]): The identifiers of the parent tasks or `None`.
-        last_edit_date (datetime): The date and time of the last modification.
-        edits (List[Edit^]): The ordered list of edits for that job.
-        version (str): The string indicates the application version of the data node to instantiate. If not provided,
-            the current version is used.
-        validity_period (Optional[timedelta]): The duration implemented as a timedelta since the last edit date for
-            which the data node can be considered up-to-date. Once the validity period has passed, the data node is
-            considered stale and relevant tasks will run even if they are skippable (see the
-            [Task management](../../userman/scenario_features/sdm/task/index.md) page for more details).
-            If _validity_period_ is set to `None`, the data node is always up-to-date.
-        edit_in_progress (bool): True if a task computing the data node has been submitted
-            and not completed yet. False otherwise.
-        editor_id (Optional[str]): The identifier of the user who is currently editing the data node.
-        editor_expiration_date (Optional[datetime]): The expiration date of the editor lock.
-        properties (dict[str, Any]): A dictionary of additional properties. Note that the
-            _properties_ parameter must at least contain an entry for _"db_name"_, _"db_engine"_, _"table_name"_:
-
-            - _"db_name"_ `(str)`: The database name, or the name of the SQLite database file.
-            - _"db_engine"_ `(str)`: The database engine. For now, the accepted values are _"sqlite"_, _"mssql"_,
-                _"mysql"_, or _"postgresql"_.
-            - _"table_name"_ `(str)`: The name of the SQL table.
-            - _"db_username"_ `(str)`: The database username.
-            - _"db_password"_ `(str)`: The database password.
-            - _"db_host"_ `(str)`: The database host. The default value is _"localhost"_.
-            - _"db_port"_ `(int)`: The database port. The default value is 1433.
-            - _"db_driver"_ `(str)`: The database driver.
-            - _"sqlite_folder_path"_ (str): The path to the folder that contains SQLite file. The default value
-                is the current working folder.
-            - _"sqlite_file_extension"_ (str): The filename extension of the SQLite file. The default value is ".db".
-            - _"db_extra_args"_ `(Dict[str, Any])`: A dictionary of additional arguments to be passed into database
-                connection string.
-            - _"exposed_type"_: The exposed type of the data read from SQL query. The default value is `pandas`.
+
+    The *properties* attribute must contain the following mandatory entries:
+
+    - *db_name* (`str`): The database name, or the name of the SQLite database file.
+    - *db_engine* (`str`): The database engine. Possible values are *sqlite*, *mssql*,
+        *mysql*, or *postgresql*.
+    - *table_name* (`str`): The name of the SQL table.
+
+    The *properties* attribute can also contain the following optional entries:
+
+    - *has_header* (`bool`): If True, indicates that the data read from the SQL table contains
+        a header.
+    - *exposed_type* (`str`): The exposed type of the data read from the SQL table. The default
+        value is `pandas`.
+    - *db_username* (`str`): The database username.
+    - *db_password* (`str`): The database password.
+    - *db_host* (`str`): The database host. The default value is *localhost*.
+    - *db_port* (`int`): The database port. The default value is 1433.
+    - *db_driver* (`str`): The database driver.
+    - *sqlite_folder_path* (`str`): The path to the folder that contains the SQLite file. The
+        default value is the current working folder.
+    - *sqlite_file_extension* (`str`): The filename extension of the SQLite file. The default
+        value is ".db".
+    - *db_extra_args* (`Dict[str, Any]`): A dictionary of additional arguments to be passed
+        into the database connection string.
     """
 
     __STORAGE_TYPE = "sql_table"
@@ -109,6 +90,7 @@ class SQLTableDataNode(_AbstractSQLDataNode):
 
     @classmethod
     def storage_type(cls) -> str:
+        """Return the storage type of the data node: `sql_table`."""
         return cls.__STORAGE_TYPE
 
     def _get_base_read_query(self) -> str:
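And the table-based counterpart, assuming the standard `Config.configure_sql_table_data_node()` helper; reads and writes then target the named table directly (names illustrative):

```python
from taipy import Config

products_cfg = Config.configure_sql_table_data_node(
    "products",
    db_name="inventory",           # with the defaults below: ./inventory.db
    db_engine="sqlite",
    table_name="products",
    sqlite_folder_path=".",
    sqlite_file_extension=".db",
)
```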

+ 2 - 1
taipy/core/exceptions/__init__.py

@@ -8,5 +8,6 @@
 # Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
 # an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
 # specific language governing permissions and limitations under the License.
-"""Exceptions raised by `core` package functionalities."""
+
+"""Exceptions raised by core package functionalities."""
 from .exceptions import *

+ 82 - 82
taipy/core/job/job.py

@@ -53,23 +53,14 @@ class Job(_Entity, _Labeled):
     and the **stacktrace** of any exception that may be raised by the user function.
 
     In addition, a job notifies scenario or sequence subscribers on its status change.
-
-    Attributes:
-        id (str): The identifier of this job.
-        task (Task^): The task of this job.
-        force (bool): Enforce the job's execution whatever the output data nodes are in cache or
-            not.
-        status (Status^): The current status of this job.
-        creation_date (datetime): The date of this job's creation.
-        stacktrace (List[str]): The list of stacktraces of the exceptions raised during the
-            execution.
-        version (str): The string indicates the application version of the job to instantiate.
-            If not provided, the latest version is used.
     """
 
     _MANAGER_NAME = "job"
     _ID_PREFIX = "JOB"
 
+    id: JobId
+    """The identifier of this job."""
+
     def __init__(self, id: JobId, task: "Task", submit_id: str, submit_entity_id: str, force=False, version=None):
         self.id = id
         self._task = task
@@ -84,12 +75,14 @@ class Job(_Entity, _Labeled):
         self.__logger = _TaipyLogger._get_logger()
         self._version = version or _VersionManagerFactory._build_manager()._get_latest_version()
 
-    def get_event_context(self):
-        return {"task_config_id": self._task.config_id}
+    def __hash__(self) -> int:
+        """Return the hash of the job id."""
+        return hash(self.id)
 
     @property  # type: ignore
     @_self_reload(_MANAGER_NAME)
-    def task(self):
+    def task(self) -> "Task":
+        """The task associated to this job."""
         return self._task
 
     @task.setter  # type: ignore
@@ -99,11 +92,13 @@ class Job(_Entity, _Labeled):
 
     @property
     def owner_id(self) -> str:
+        """The identifier of the task of this job."""
         return self.task.id
 
     @property  # type: ignore
     @_self_reload(_MANAGER_NAME)
-    def force(self):
+    def force(self) -> bool:
+        """Enforce the job's execution whatever the output data nodes are in cache or not."""
         return self._force
 
     @force.setter  # type: ignore
@@ -112,22 +107,26 @@ class Job(_Entity, _Labeled):
         self._force = val
 
     @property
-    def submit_id(self):
+    def submit_id(self) -> str:
+        """The identifier of the submission that triggered the job creation."""
         return self._submit_id
 
     @property
-    def submit_entity_id(self):
+    def submit_entity_id(self) -> str:
+        """The identifier of the submitted entity that triggered the job creation."""
         return self._submit_entity_id
 
     @property  # type: ignore
     def submit_entity(self):
+        """The submitted entity that triggered the job creation."""
         from ..taipy import get as tp_get
 
         return tp_get(self._submit_entity_id)
 
     @property  # type: ignore
     @_self_reload(_MANAGER_NAME)
-    def status(self):
+    def status(self) -> Status:
+        """The current status of this job."""
         return self._status
 
     @status.setter  # type: ignore
@@ -138,7 +137,8 @@ class Job(_Entity, _Labeled):
 
     @property  # type: ignore
     @_self_reload(_MANAGER_NAME)
-    def creation_date(self):
+    def creation_date(self) -> datetime:
+        """The date time when the job was created."""
         return self._creation_date
 
     @creation_date.setter  # type: ignore
@@ -149,32 +149,24 @@ class Job(_Entity, _Labeled):
     @property
     @_self_reload(_MANAGER_NAME)
     def submitted_at(self) -> datetime:
-        """Get the date time when the job was submitted.
-
-        Returns:
-            datetime: The date time when the job was submitted.
-        """
+        """The date time when the job was submitted."""
         return self._status_change_records["SUBMITTED"]
 
     @property
     @_self_reload(_MANAGER_NAME)
     def run_at(self) -> Optional[datetime]:
-        """Get the date time when the job was run.
+        """The date time when the job was run.
 
-        Returns:
-            Optional[datetime]: The date time when the job was run.
-                If the job is not run, None is returned.
+        If the job has not been run, the run_at time is None.
         """
         return self._status_change_records.get(Status.RUNNING.name, None)
 
     @property
     @_self_reload(_MANAGER_NAME)
     def finished_at(self) -> Optional[datetime]:
-        """Get the date time when the job was finished.
+        """The date time when the job was finished.
 
-        Returns:
-            Optional[datetime]: The date time when the job was finished.
-                If the job is not finished, None is returned.
+        If the job is not finished, the finished_at time is None.
         """
         if self.is_finished():
             if self.is_completed():
@@ -193,14 +185,12 @@ class Job(_Entity, _Labeled):
     @property
     @_self_reload(_MANAGER_NAME)
     def execution_duration(self) -> Optional[float]:
-        """Get the duration of the job execution in seconds.
-        The execution time is the duration from the job running to the job completion.
+        """The duration of the job execution in seconds.
 
-        Returns:
-            Optional[float]: The duration of the job execution in seconds.
-                - If the job was not run, None is returned.
-                - If the job is not finished, the execution time is the duration
-                  from the running time to the current time.
+        The execution duration is the duration in seconds from the job running time to the
+        job completion time. If the job was not run, the execution duration is None. If the
+        job is not finished yet, the execution duration is the duration from the running time
+        to the current time.
         """
         if Status.RUNNING.name not in self._status_change_records:
             return None
@@ -213,13 +203,10 @@ class Job(_Entity, _Labeled):
     @property
     @_self_reload(_MANAGER_NAME)
     def pending_duration(self) -> Optional[float]:
-        """Get the duration of the job in the pending state in seconds.
+        """The duration of the job in seconds spent in the pending status.
 
-        Returns:
-            Optional[float]: The duration of the job in the pending state in seconds.
-                - If the job is not running, None is returned.
-                - If the job is not pending, the pending time is the duration
-                  from the submission to the current time.
+        If the job has never been pending, the pending duration is None. If the job is
+        currently pending, the pending duration is the duration from the submission to
+        the current time.
         """
         if Status.PENDING.name not in self._status_change_records:
             return None
@@ -234,13 +221,11 @@ class Job(_Entity, _Labeled):
     @property
     @_self_reload(_MANAGER_NAME)
     def blocked_duration(self) -> Optional[float]:
-        """Get the duration of the job in the blocked state in seconds.
+        """The duration of the job in seconds spent in the blocked status.
 
-        Returns:
-            Optional[float]: The duration of the job in the blocked state in seconds.
-                - If the job is not running, None is returned.
-                - If the job is not blocked, the blocked time is the duration
-                  from the submission to the current time.
+        If the job has never been blocked, the blocked duration is None. If the job is
+        currently blocked, the blocked duration is the duration from the submission to
+        the current time.
         """
         if Status.BLOCKED.name not in self._status_change_records:
             return None
@@ -259,6 +244,7 @@ class Job(_Entity, _Labeled):
     @property  # type: ignore
     @_self_reload(_MANAGER_NAME)
     def stacktrace(self) -> List[str]:
+        """The list of stacktraces of the exceptions raised during the execution."""
         return self._stacktrace
 
     @stacktrace.setter  # type: ignore
@@ -267,65 +253,75 @@ class Job(_Entity, _Labeled):
         self._stacktrace = val
 
     @property
-    def version(self):
+    def version(self) -> str:
+        """The application version of the job.
+
+        If not provided, the latest version is used.
+        """
         return self._version
 
-    def __contains__(self, task: "Task"):
+    def __contains__(self, task: "Task") -> bool:
+        """Check if the job contains the task."""
         return self.task.id == task.id
 
-    def __lt__(self, other):
+    def __lt__(self, other) -> bool:
+        """Compare the creation date of the job with another job."""
         return self.creation_date.timestamp() < other.creation_date.timestamp()
 
-    def __le__(self, other):
+    def __le__(self, other) -> bool:
+        """Compare the creation date of the job with another job."""
         return self.creation_date.timestamp() <= other.creation_date.timestamp()
 
-    def __gt__(self, other):
+    def __gt__(self, other) -> bool:
+        """Compare the creation date of the job with another job."""
         return self.creation_date.timestamp() > other.creation_date.timestamp()
 
-    def __ge__(self, other):
+    def __ge__(self, other) -> bool:
+        """Compare the creation date of the job with another job."""
         return self.creation_date.timestamp() >= other.creation_date.timestamp()
 
-    def __eq__(self, other):
+    def __eq__(self, other) -> bool:
+        """Check if the job is equal to another job."""
         return isinstance(other, Job) and self.id == other.id
 
     @_run_callbacks
-    def blocked(self):
+    def blocked(self) -> None:
         """Set the status to _blocked_ and notify subscribers."""
         self.status = Status.BLOCKED
 
     @_run_callbacks
-    def pending(self):
+    def pending(self) -> None:
         """Set the status to _pending_ and notify subscribers."""
         self.status = Status.PENDING
 
     @_run_callbacks
-    def running(self):
+    def running(self) -> None:
         """Set the status to _running_ and notify subscribers."""
         self.status = Status.RUNNING
 
     @_run_callbacks
-    def canceled(self):
+    def canceled(self) -> None:
         """Set the status to _canceled_ and notify subscribers."""
         self.status = Status.CANCELED
 
     @_run_callbacks
-    def abandoned(self):
+    def abandoned(self) -> None:
         """Set the status to _abandoned_ and notify subscribers."""
         self.status = Status.ABANDONED
 
     @_run_callbacks
-    def failed(self):
+    def failed(self) -> None:
         """Set the status to _failed_ and notify subscribers."""
         self.status = Status.FAILED
 
     @_run_callbacks
-    def completed(self):
+    def completed(self) -> None:
         """Set the status to _completed_ and notify subscribers."""
         self.status = Status.COMPLETED
         self.__logger.info(f"job {self.id} is completed.")
 
     @_run_callbacks
-    def skipped(self):
+    def skipped(self) -> None:
         """Set the status to _skipped_ and notify subscribers."""
         self.status = Status.SKIPPED
 
@@ -404,20 +400,24 @@ class Job(_Entity, _Labeled):
     def is_finished(self) -> bool:
         """Indicate if the job is finished.
 
+        A job is considered finished if it is completed, failed, canceled, skipped, or abandoned.
+
         Returns:
             True if the job is finished.
         """
         return self.is_completed() or self.is_failed() or self.is_canceled() or self.is_skipped() or self.is_abandoned()
 
     def _is_finished(self) -> bool:
-        """Indicate if the job is finished. This function will not trigger the persistence feature like is_finished().
+        """Indicate if the job is finished.
+
+        Unlike is_finished(), this function does not trigger the persistence feature.
 
         Returns:
             True if the job is finished.
         """
         return self._status in [Status.COMPLETED, Status.FAILED, Status.CANCELED, Status.SKIPPED, Status.ABANDONED]
 
-    def _on_status_change(self, *functions):
+    def _on_status_change(self, *functions) -> None:
         """Get a notification when the status of the job changes.
 
         Jobs are assigned different statuses (_submitted_, _pending_, etc.) before being finished.
@@ -427,24 +427,13 @@ class Job(_Entity, _Labeled):
         Parameters:
             functions: Callables that will be called on each status change.
         """
-        functions = list(functions)
+        functions = list(functions)  # type: ignore
         function = functions.pop()
         self._subscribers.append(function)
 
         if functions:
             self._on_status_change(*functions)
 
-    def __hash__(self):
-        return hash(self.id)
-
-    def _unlock_edit_on_outputs(self):
-        for dn in self.task.output.values():
-            dn.unlock_edit()
-
-    @staticmethod
-    def _serialize_subscribers(subscribers: List) -> List:
-        return _fcts_to_dict(subscribers)
-
     def get_label(self) -> str:
         """Returns the job simple label prefixed by its owner label.
 
@@ -466,12 +455,23 @@ class Job(_Entity, _Labeled):
 
         Returns:
             A ReasonCollection object that can function as a Boolean value,
-            which is True if the job can be deleted. False otherwise.
+                which is True if the job can be deleted. False otherwise.
         """
         from ... import core as tp
 
         return tp.is_deletable(self)
 
+    def get_event_context(self):
+        return {"task_config_id": self._task.config_id}
+
+    def _unlock_edit_on_outputs(self) -> None:
+        for dn in self.task.output.values():
+            dn.unlock_edit()
+
+    @staticmethod
+    def _serialize_subscribers(subscribers: List) -> List:
+        return _fcts_to_dict(subscribers)
+
 
 @_make_event.register(Job)
 def _make_event_for_job(
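A short sketch of how the properties documented above are typically inspected after a submission, assuming a one-task scenario built with the standard Config helpers; all ids are illustrative:

```python
import taipy as tp
from taipy import Config

def double(nb: int) -> int:
    return 2 * nb

nb_cfg = Config.configure_data_node("nb", default_data=21)
result_cfg = Config.configure_data_node("result")
task_cfg = Config.configure_task("double", double, nb_cfg, result_cfg)
scenario_cfg = Config.configure_scenario("my_scenario", [task_cfg])

if __name__ == "__main__":
    tp.Orchestrator().run()
    scenario = tp.create_scenario(scenario_cfg)
    submission = tp.submit(scenario)
    job = submission.jobs[0]
    # The job exposes its status and timing information:
    print(job.status, job.submitted_at, job.execution_duration)
```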

+ 1 - 0
taipy/core/job/job_id.py

@@ -12,4 +12,5 @@
 from typing import NewType
 
 JobId = NewType("JobId", str)
+"""Type that holds a `Job^` identifier."""
 JobId.__doc__ = """Type that holds a `Job^` identifier."""

+ 3 - 5
taipy/core/notification/__init__.py

@@ -9,16 +9,14 @@
 # an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
 # specific language governing permissions and limitations under the License.
 
-"""
-Package for notifications about changes on `Orchestrator^` service entities.
-
+"""# Package for scenario management events.
 
-The Core service generates `Event^` objects to track changes on entities.
+The core package functionalities generate `Event^` objects to track changes on entities.
 These events are then relayed to a `Notifier^`, which handles the dispatch
 to consumers interested in specific event topics.
 
 To subscribe, a consumer needs to invoke the `Notifier.register()^` method.
-This call will yield a `RegistrationId^` and a dedicated event queue for
+This call will yield a `RegistrationId` and a dedicated event queue for
 receiving notifications.
 
 To handle notifications, an event consumer (e.g., the `CoreEventConsumerBase^`

+ 1 - 1
taipy/core/notification/_topic.py

@@ -41,7 +41,7 @@ class _Topic:
             raise InvalidEventOperation
         return operation
 
-    def __hash__(self):
+    def __hash__(self) -> int:
         return hash((self.entity_type, self.entity_id, self.operation, self.attribute_name))
 
     def __eq__(self, __value) -> bool:

+ 26 - 33
taipy/core/notification/core_event_consumer.py

@@ -23,41 +23,34 @@ class CoreEventConsumerBase(threading.Thread):
     It should be subclassed, and the `process_event` method should be implemented to define
     the custom logic for handling incoming events.
 
-    Subclasses should implement the `process_event` method to define their specific event handling behavior.
-
-    Example usage:
-
-    ```python
-    class MyEventConsumer(CoreEventConsumerBase):
-        def process_event(self, event: Event):
-            # Custom event processing logic here
-            print(f"Received event created at : {event.creation_date}")
-            pass
-
-    if __name__ == "__main__":
-        registration_id, registered_queue = Notifier.register(
-            entity_type=EventEntityType.SCENARIO,
-            operation=EventOperation.CREATION
-        )
-
-        consumer = MyEventConsumer(registration_id, registered_queue)
-        consumer.start()
-        # ...
-        consumer.stop()
-
-        Notifier.unregister(registration_id)
-    ```
-
-    Firstly, we would create a consumer class extending from CoreEventConsumerBase
-    and decide how to process the incoming events by defining the process_event.
-    Then, we would specify the type of event we want to receive by registering with the Notifier.
-    After that, we create an object of the consumer class by providing
-    the registration_id and registered_queue and start consuming the event.
-
-    Attributes:
-        queue (SimpleQueue): The queue from which events will be consumed.
+    ??? example "Basic usage"
 
+        ```python
+        class MyEventConsumer(CoreEventConsumerBase):
+            def process_event(self, event: Event):
+                # Custom event processing logic here
+                print(f"Received event created at : {event.creation_date}")
+                pass
 
+        if __name__ == "__main__":
+            registration_id, registered_queue = Notifier.register(
+                entity_type=EventEntityType.SCENARIO,
+                operation=EventOperation.CREATION
+            )
+
+            consumer = MyEventConsumer(registration_id, registered_queue)
+            consumer.start()
+            # ...
+            consumer.stop()
+
+            Notifier.unregister(registration_id)
+        ```
+
+        First, we create a consumer class extending CoreEventConsumerBase and define how
+        to process the incoming events by implementing process_event.
+        Then, we specify the type of events we want to receive by registering with the Notifier.
+        Finally, we instantiate the consumer class with the registration_id and the
+        registered_queue, and start consuming the events.
     """
 
     def __init__(self, registration_id: str, queue: SimpleQueue) -> None:

+ 27 - 16
taipy/core/notification/event.py

@@ -23,7 +23,13 @@ class EventOperation(_ReprEnum):
 
     `EventOperation` is used as an attribute of the `Event^` object to describe the
     operation performed on an entity.<br>
-    The possible operations are `CREATION`, `UPDATE`, `DELETION`, or `SUBMISSION`.
+    The possible operations are:
+
+     - `CREATION`: Event related to a creation operation.
+     - `UPDATE`: Event related to an update operation.
+     - `DELETION`: Event related to a deletion operation.
+     - `SUBMISSION`: Event related to a submission operation.
+
     """
 
     CREATION = 1
@@ -37,7 +43,15 @@ class EventEntityType(_ReprEnum):
 
     `EventEntityType` is used as an attribute of the `Event^` object to describe
     an entity that was changed.<br>
-    The possible operations are `CYCLE`, `SCENARIO`, `SEQUENCE`, `TASK`, `DATA_NODE`, `JOB` or `SUBMISSION`.
+    The possible entity types are:
+
+    - `CYCLE`: Event related to a cycle entity.
+    - `SCENARIO`: Event related to a scenario entity.
+    - `SEQUENCE`: Event related to a sequence entity.
+    - `TASK`: Event related to a task entity.
+    - `DATA_NODE`: Event related to a data node entity.
+    - `JOB`: Event related to a job entity.
+    - `SUBMISSION`: Event related to a submission entity.
     """
 
     CYCLE = 1
@@ -69,32 +83,29 @@ _ENTITY_TO_EVENT_ENTITY_TYPE = {
 
 @dataclass(frozen=True)
 class Event:
-    """Event object used to notify any change in the Core service.
+    """Event object used to notify any change in a Taipy application.
 
     An event holds the necessary attributes to identify the change.
-
-    Attributes:
-        entity_type (EventEntityType^): Type of the entity that was changed (`DataNode^`,
-            `Scenario^`, `Cycle^`, etc. ).
-        entity_id (Optional[str]): Unique identifier of the entity that was changed.
-        operation (EventOperation^): Enum describing the operation (among `CREATION`, `UPDATE`, `DELETION`,
-            and `SUBMISSION`) that was performed on the entity.
-        attribute_name (Optional[str]): Name of the entity's attribute changed. Only relevant for `UPDATE`
-            operations
-        attribute_value (Optional[str]): Name of the entity's attribute changed. Only relevant for `UPDATE`
-            operations
-        metadata (dict): A dict of additional medata about the source of this event
-        creation_date (datetime): Date and time of the event creation.
     """
 
     entity_type: EventEntityType
+    """Type of the entity that was changed (`DataNode^`, `Scenario^`, `Cycle^`, etc. )."""
     operation: EventOperation
+    """Enum describing the operation that was performed on the entity.
+
+    The operation is among `CREATION`, `UPDATE`, `DELETION`, and `SUBMISSION`.
+    """
     entity_id: Optional[str] = None
+    """Unique identifier of the entity that was changed."""
     attribute_name: Optional[str] = None
+    """Name of the entity's attribute changed. Only relevant for `UPDATE` operations."""
     attribute_value: Optional[Any] = None
+    """Value of the entity's attribute changed. Only relevant for `UPDATE` operations."""
 
     metadata: dict = field(default_factory=dict)
+    """A dictionary of additional metadata about the source of this event."""
     creation_date: datetime = field(init=False)
+    """Date and time of the event creation."""
 
     def __post_init__(self):
         # Creation date
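Events are normally emitted by Taipy itself, but a hand-built instance shows the fields above; note that *creation_date* is filled in automatically (the entity id is made up):

```python
from taipy.core.notification import Event, EventEntityType, EventOperation

event = Event(
    entity_type=EventEntityType.SCENARIO,
    operation=EventOperation.CREATION,
    entity_id="SCENARIO_my_scenario_id",  # illustrative identifier
)
print(event.creation_date)  # set automatically in __post_init__
```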

+ 21 - 21
taipy/core/notification/notifier.py

@@ -51,7 +51,7 @@ def _publish_event(
 
 
 class Notifier:
-    """A class for managing event registrations and publishing `Orchestrator^` service events."""
+    """A class for managing event registrations and publishing a Taipy application events."""
 
     _topics_registrations_list: Dict[_Topic, Set[_Registration]] = {}
 
@@ -75,14 +75,14 @@ class Notifier:
         - A Scenario deletion
         - Job failures
 
-        Example usage:
+        !!! example "Standard usage"
 
-        ```python
-        registration_id, registered_queue = Notifier.register(
-            entity_type=EventEntityType.SCENARIO,
-            operation=EventOperation.CREATION
-        )
-        ```
+            ```python
+            registration_id, registered_queue = Notifier.register(
+                entity_type=EventEntityType.SCENARIO,
+                operation=EventOperation.CREATION
+            )
+            ```
 
         Parameters:
             entity_type (Optional[EventEntityType^]): If provided, the listener will
@@ -135,20 +135,20 @@ class Notifier:
     def unregister(cls, registration_id: str) -> None:
         """Unregister a listener.
 
-        Example usage:
+        !!! example "Standard usage"
 
-        ```python
-        registration_id, registered_queue = Notifier.register(
-            entity_type=EventEntityType.CYCLE,
-            entity_id="CYCLE_cycle_1",
-            operation=EventOperation.CREATION
-        )
+            ```python
+            registration_id, registered_queue = Notifier.register(
+                entity_type=EventEntityType.CYCLE,
+                entity_id="CYCLE_cycle_1",
+                operation=EventOperation.CREATION
+            )
 
-        Notifier.unregister(registration_id)
-        ```
+            Notifier.unregister(registration_id)
+            ```
 
         Parameters:
-            registration_id (RegistrationId^): The registration id returned by the `register` method.
+            registration_id (RegistrationId): The registration id returned by the `register` method.
         """
         to_remove_registration: Optional[_Registration] = None
 
@@ -165,11 +165,11 @@ class Notifier:
                 del cls._topics_registrations_list[to_remove_registration.topic]
 
     @classmethod
-    def publish(cls, event) -> None:
-        """Publish a `Orchestrator^` service event to all registered listeners whose topic matches the event.
+    def publish(cls, event: Event) -> None:
+        """Publish a Taipy application event to all registered listeners whose topic matches the event.
 
         Parameters:
-            event (Event^): The event to publish.
+            event (Event^): The event to publish.
         """
         for topic, registrations in cls._topics_registrations_list.items():
             if Notifier._is_matching(event, topic):
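To complement the registration examples above, a sketch of consuming events from the registered queue without subclassing `CoreEventConsumerBase` (the one-second timeout is arbitrary):

```python
from queue import Empty

from taipy.core.notification import EventEntityType, EventOperation, Notifier

registration_id, registered_queue = Notifier.register(
    entity_type=EventEntityType.SCENARIO,
    operation=EventOperation.CREATION,
)
try:
    # Blocks until a matching event arrives or the timeout expires.
    event = registered_queue.get(timeout=1)
    print(event.entity_id)
except Empty:
    pass  # no scenario was created within the timeout
finally:
    Notifier.unregister(registration_id)
```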

+ 1 - 0
taipy/core/notification/registration_id.py

@@ -12,4 +12,5 @@
 from typing import NewType
 
 RegistrationId = NewType("RegistrationId", str)
+"""Registration identifier. It can be used to instantiate a `CoreEventConsumerBase^`."""
 RegistrationId.__doc__ = """Registration identifier. It can be used to instantiate a `CoreEventConsumerBase^`."""

+ 14 - 14
taipy/core/orchestrator.py

@@ -25,8 +25,14 @@ from .exceptions.exceptions import OrchestratorServiceIsAlreadyRunning
 
 
 class Orchestrator:
-    """
-    Orchestrator service
+    """ The Taipy Orchestrator service.
+
+    When run, the Orchestrator starts a job dispatcher which is responsible for
+    dispatching the submitted jobs to an available executor for their execution.
+
+    !!! Note "Configuration update"
+        The Orchestrator service blocks the Config from updates while running.
+
     """
 
     _is_running = False
@@ -41,14 +47,11 @@ class Orchestrator:
     _dispatcher: Optional[_JobDispatcher] = None
 
     def __init__(self) -> None:
-        """
-        Initialize an Orchestrator service.
-        """
+        """Initialize an Orchestrator service."""
         pass
 
-    def run(self, force_restart=False):
-        """
-        Start an Orchestrator service.
+    def run(self, force_restart=False) -> None:
+        """ Start the Orchestrator service.
 
         This function checks and locks the configuration, manages application's version,
         and starts a job dispatcher.
@@ -63,9 +66,8 @@ class Orchestrator:
         self.__start_dispatcher(force_restart)
         self.__logger.info("Orchestrator service has been started.")
 
-    def stop(self, wait: bool = True, timeout: Optional[float] = None):
-        """
-        Stop the Orchestrator service.
+    def stop(self, wait: bool = True, timeout: Optional[float] = None) -> None:
+        """Stop the Orchestrator service.
         This function stops the dispatcher and unblock the Config for update.
 
         Parameters:
@@ -86,9 +88,7 @@ class Orchestrator:
 
     @classmethod
     def _manage_version_and_block_config(cls):
-        """
-        Manage the application's version and block the Config from updates.
-        """
+        """Manage the application's version and block the Config from updates."""
         if cls._version_is_initialized:
             return
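The typical service lifecycle, as a minimal sketch: `run()` locks the Config and starts the dispatcher, `stop()` releases it:

```python
import taipy as tp

if __name__ == "__main__":
    orchestrator = tp.Orchestrator()
    orchestrator.run()   # locks the Config and starts the job dispatcher
    # ... create and submit scenarios here ...
    orchestrator.stop()  # stops the dispatcher and unblocks the Config
```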
 

+ 3 - 1
taipy/core/reason/__init__.py

@@ -8,7 +8,9 @@
 # Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
 # an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
 # specific language governing permissions and limitations under the License.
-"""Reasons for the Taipy actions why they can't be performed.
+""" # Package for managing reasons why some Taipy operations are not allowed.
+
+Reasons for the Taipy actions why they can't be performed.
 
 Because Taipy applications are natively multiuser, asynchronous, and dynamic,
 some functions should not be invoked in some specific contexts. You can protect

+ 1 - 1
taipy/core/reason/reason.py

@@ -15,7 +15,7 @@ from typing import Any, Optional
 class Reason:
     """A reason explains why a specific action cannot be performed.
 
-    This is a parent class aiming at being implemented by specific sub-classes.
+    This is a parent class aiming at being implemented by specific subclasses.
 
     Because Taipy applications are natively multiuser, asynchronous, and dynamic,
     some functions might not be called in some specific contexts. You can protect

+ 3 - 3
taipy/core/reason/reason_collection.py

@@ -15,14 +15,14 @@ from .reason import Reason
 
 
 class ReasonCollection:
-    """This class is used to store all the reasons to explain why some Taipy operations are not allowed.
+    """Class used to store all the reasons to explain why some Taipy operations are not allowed.
 
     Because Taipy applications are natively multiuser, asynchronous, and dynamic,
     some functions might not be called in some specific contexts. You can protect
     such calls by calling other methods that return a `ReasonCollection`. It acts like a
     boolean: True if the operation can be performed and False otherwise.
-    If the action cannot be performed, the ReasonCollection holds all the individual reasons as a list
-    of `Reason` objects. Each `Reason` explains why the operation cannot be performed.
+    If the action cannot be performed, the ReasonCollection holds all the individual reasons as
+    a list of `Reason` objects. Each `Reason` explains why the operation cannot be performed.
     """
 
     def __init__(self) -> None:
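A sketch of the boolean-like usage described above. It assumes `Job` and `tp.delete()` are importable as in the rest of the package, and that the `reasons` property joins the individual messages:

```python
import taipy as tp
from taipy import Job

def delete_if_allowed(job: Job) -> None:
    reasons = job.is_deletable()  # a ReasonCollection, usable as a boolean
    if reasons:
        tp.delete(job.id)
    else:
        print(f"Cannot delete {job.id}: {reasons.reasons}")
```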

+ 325 - 301
taipy/core/scenario/scenario.py

@@ -63,7 +63,7 @@ class Scenario(_Entity, Submittable, _Labeled):
         It is not recommended to instantiate a `Scenario` directly. Instead, it should be
         created with the `create_scenario()^` function.
 
-    !!! Example
+    ??? Example
 
         ```python
         import taipy as tp
@@ -92,22 +92,6 @@ class Scenario(_Entity, Submittable, _Labeled):
             # Retrieve all scenarios
             all_scenarios = tp.get_scenarios()
         ```
-
-    Attributes:
-        config_id (str): The identifier of the `ScenarioConfig^`.
-        tasks (Set[Task^]): The set of tasks.
-        additional_data_nodes (Set[DataNode^]): The set of additional data nodes.
-        sequences (Dict[str, Sequence^]): The dictionary of sequences: subsets of tasks that can be submitted
-            together independently of the rest of the scenario's tasks.
-        properties (dict[str, Any]): A dictionary of additional properties.
-        scenario_id (str): The unique identifier of this scenario.
-        creation_date (datetime): The date and time of the scenario's creation.
-        is_primary (bool): True if the scenario is the primary of its cycle. False otherwise.
-        cycle (Cycle^): The cycle of the scenario.
-        subscribers (List[Callable]): The list of callbacks to be called on `Job^`'s status change.
-        tags (Set[str]): The list of scenario's tags.
-        version (str): The string indicates the application version of the scenario to instantiate.
-            If not provided, the latest version is used.
     """
 
     _ID_PREFIX = "SCENARIO"
@@ -119,6 +103,9 @@ class Scenario(_Entity, Submittable, _Labeled):
     _SEQUENCE_SUBSCRIBERS_KEY = "subscribers"
     __CHECK_INIT_DONE_ATTR_NAME = "_init_done"
 
+    id: ScenarioId
+    """The unique identifier of this scenario."""
+
     def __init__(
         self,
         config_id: str,
@@ -158,24 +145,21 @@ class Scenario(_Entity, Submittable, _Labeled):
         self._version = version or _VersionManagerFactory._build_manager()._get_latest_version()
         self._init_done = True
 
-    @staticmethod
-    def _new_id(config_id: str) -> ScenarioId:
-        """Generate a unique scenario identifier."""
-        return ScenarioId(Scenario.__SEPARATOR.join([Scenario._ID_PREFIX, _validate_id(config_id), str(uuid.uuid4())]))
-
     def __getstate__(self):
         return self.id
 
-    def __setstate__(self, id):
+    def __setstate__(self, id) -> None:
         from ... import core as tp
 
         sc = tp.get(id)
         self.__dict__ = sc.__dict__
 
-    def __hash__(self):
+    def __hash__(self) -> int:
+        """Return the hash of the scenario."""
         return hash(self.id)
 
-    def __eq__(self, other):
+    def __eq__(self, other) -> bool:
+        """Check if the scenario is equal to another scenario."""
         return isinstance(other, Scenario) and self.id == other.id
 
     def __setattr__(self, name: str, value: Any) -> None:
@@ -188,7 +172,20 @@ class Scenario(_Entity, Submittable, _Labeled):
             except AttributeError:
                 return super().__setattr__(name, value)
 
-    def __getattr__(self, attribute_name) -> Union[Sequence, Task, DataNode]:
+    def __getattr__(self, attribute_name: str) -> Union[Sequence, Task, DataNode]:
+        """Get a scenario attribute by its name.
+
+        The attribute can be a sequence, a task, or a data node.
+
+        Parameters:
+            attribute_name (str): The name of the attribute to get.
+
+        Returns:
+            The attribute with the given name.
+
+        Raises:
+            AttributeError: If the attribute is not found.
+        """
         protected_attribute_name = _validate_id(attribute_name)
         sequences = self._get_sequences()
         if protected_attribute_name in sequences:
@@ -203,351 +200,143 @@ class Scenario(_Entity, Submittable, _Labeled):
         raise AttributeError(f"{attribute_name} is not an attribute of scenario {self.id}")
 
     @property
-    def config_id(self):
+    def config_id(self) -> str:
+        """The identifier of the `ScenarioConfig^`."""
         return self._config_id
 
     @property  # type: ignore
     @_self_reload(_MANAGER_NAME)
     def sequences(self) -> Dict[str, Sequence]:
+        """The dictionary of the scenario's sequences.
+
+        The sequences are subsets of tasks that can be submitted together independently of
+        the rest of the scenario's tasks."""
         return self._get_sequences()
 
     @sequences.setter  # type: ignore
     @_self_setter(_MANAGER_NAME)
     def sequences(
         self, sequences: Dict[str, Dict[str, Union[List[Task], List[TaskId], _ListAttributes, List[_Subscriber], Dict]]]
-    ):
+    ) -> None:
         self._sequences = sequences
         actual_sequences = self._get_sequences()
         for sequence_name in sequences.keys():
             if not actual_sequences[sequence_name]._is_consistent():
                 raise InvalidSequence(actual_sequences[sequence_name].id)
 
-    def add_sequence(
-        self,
-        name: str,
-        tasks: Union[List[Task], List[TaskId]],
-        properties: Optional[Dict] = None,
-        subscribers: Optional[List[_Subscriber]] = None,
-    ):
-        """Add a sequence to the scenario.
-
-        Parameters:
-            name (str): The name of the sequence.
-            tasks (Union[List[Task], List[TaskId]]): The list of scenario's tasks to add to the sequence.
-            properties (Optional[Dict]): The optional properties of the sequence.
-            subscribers (Optional[List[_Subscriber]]): The optional list of callbacks to be called on
-                `Job^`'s status change.
-
-        Raises:
-            SequenceTaskDoesNotExistInScenario^: If a task in the sequence does not exist in the scenario.
-            SequenceAlreadyExists^: If a sequence with the same name already exists in the scenario.
-        """
-        if name in self.sequences:
-            raise SequenceAlreadyExists(name, self.id)
-        seq = self._set_sequence(name, tasks, properties, subscribers)
-        Notifier.publish(_make_event(seq, EventOperation.CREATION))
-
-    def update_sequence(
-        self,
-        name: str,
-        tasks: Union[List[Task], List[TaskId]],
-        properties: Optional[Dict] = None,
-        subscribers: Optional[List[_Subscriber]] = None,
-    ):
-        """Update an existing sequence.
-
-        Parameters:
-            name (str): The name of the sequence to update.
-            tasks (Union[List[Task], List[TaskId]]): The new list of scenario's tasks.
-            properties (Optional[Dict]): The new properties of the sequence.
-            subscribers (Optional[List[_Subscriber]]): The new list of callbacks to be called on `Job^`'s status change.
-
-        Raises:
-            SequenceTaskDoesNotExistInScenario^: If a task in the list does not exist in the scenario.
-            SequenceAlreadyExists^: If a sequence with the same name already exists in the scenario.
-        """
-        if name not in self.sequences:
-            raise NonExistingSequence(name, self.id)
-        seq = self._set_sequence(name, tasks, properties, subscribers)
-        Notifier.publish(_make_event(seq, EventOperation.UPDATE))
-
-    def _set_sequence(
-        self,
-        name: str,
-        tasks: Union[List[Task], List[TaskId]],
-        properties: Optional[Dict] = None,
-        subscribers: Optional[List[_Subscriber]] = None,
-    ) -> Sequence:
-        _scenario = _Reloader()._reload(self._MANAGER_NAME, self)
-        _scenario_task_ids = {task.id if isinstance(task, Task) else task for task in _scenario._tasks}
-        _sequence_task_ids: Set[TaskId] = {task.id if isinstance(task, Task) else task for task in tasks}
-        self.__check_sequence_tasks_exist_in_scenario_tasks(name, _sequence_task_ids, self.id, _scenario_task_ids)
-
-        from taipy.core.sequence._sequence_manager_factory import _SequenceManagerFactory
-
-        seq_manager = _SequenceManagerFactory._build_manager()
-        seq = seq_manager._create(name, tasks, subscribers or [], properties or {}, self.id, self.version)
-
-        _sequences = _Reloader()._reload(self._MANAGER_NAME, self)._sequences
-        _sequences.update(
-            {
-                name: {
-                    self._SEQUENCE_TASKS_KEY: tasks,
-                    self._SEQUENCE_PROPERTIES_KEY: properties or {},
-                    self._SEQUENCE_SUBSCRIBERS_KEY: subscribers or [],
-                }
-            }
-        )
-        self.sequences = _sequences  # type: ignore
-        return seq
-
-    def add_sequences(self, sequences: Dict[str, Union[List[Task], List[TaskId]]]):
-        """Add multiple sequences to the scenario.
-
-        Note:
-            To provide properties and subscribers for the sequences, use `Scenario.add_sequence^` instead.
-
-        Parameters:
-            sequences (Dict[str, Union[List[Task], List[TaskId]]]):
-                A dictionary containing sequences to add. Each key is a sequence name, and the value must
-                be a list of the scenario tasks.
-
-        Raises:
-            SequenceTaskDoesNotExistInScenario^: If a task in the sequence does not exist in the scenario.
-        """
-        _scenario = _Reloader()._reload(self._MANAGER_NAME, self)
-        _sc_task_ids = {task.id if isinstance(task, Task) else task for task in _scenario._tasks}
-        for name, tasks in sequences.items():
-            _seq_task_ids: Set[TaskId] = {task.id if isinstance(task, Task) else task for task in tasks}
-            self.__check_sequence_tasks_exist_in_scenario_tasks(name, _seq_task_ids, self.id, _sc_task_ids)
-        # Need to parse twice the sequences to avoid adding some sequences and not others in case of exception
-        for name, tasks in sequences.items():
-            self.add_sequence(name, tasks)
-
-    def remove_sequence(self, name: str):
-        """Remove a sequence from the scenario.
-
-        Parameters:
-            name (str): The name of the sequence to remove.
-        """
-        seq_id = self.sequences[name].id
-        _sequences = _Reloader()._reload(self._MANAGER_NAME, self)._sequences
-        _sequences.pop(name)
-        self.sequences = _sequences  # type: ignore
-        Notifier.publish(Event(EventEntityType.SEQUENCE, EventOperation.DELETION, entity_id=seq_id))
-
-    def remove_sequences(self, sequence_names: List[str]):
-        """
-        Remove multiple sequences from the scenario.
-
-        Parameters:
-            sequence_names (List[str]): A list of sequence names to remove.
-        """
-        _sequences = _Reloader()._reload(self._MANAGER_NAME, self)._sequences
-        for sequence_name in sequence_names:
-            seq_id = self.sequences[sequence_name].id
-            _sequences.pop(sequence_name)
-            Notifier.publish(
-                Event(
-                    EventEntityType.SEQUENCE,
-                    EventOperation.DELETION,
-                    entity_id=seq_id,
-                )
-            )
-        self.sequences = _sequences  # type: ignore
-
-    def rename_sequence(self, old_name, new_name):
-        """Rename a sequence of the scenario.
-
-        Parameters:
-            old_name (str): The current name of the sequence to rename.
-            new_name (str): The new name of the sequence.
-
-        Raises:
-            SequenceAlreadyExists^: If a sequence with the same name already exists in the scenario.
-        """
-        if old_name == new_name:
-            return
-        if new_name in self.sequences:
-            raise SequenceAlreadyExists(new_name, self.id)
-        self._sequences[new_name] = self._sequences[old_name]
-        del self._sequences[old_name]
-        self.sequences = self._sequences  # type: ignore
-        Notifier.publish(
-            Event(
-                EventEntityType.SCENARIO,
-                EventOperation.UPDATE,
-                entity_id=self.id,
-                attribute_name="sequences",
-                attribute_value=self._sequences,
-            )
-        )
-
-    @staticmethod
-    def __check_sequence_tasks_exist_in_scenario_tasks(
-        sequence_name: str, sequence_task_ids: Set[TaskId], scenario_id: ScenarioId, scenario_task_ids: Set[TaskId]
-    ):
-        non_existing_sequence_task_ids_in_scenario = set()
-        for sequence_task_id in sequence_task_ids:
-            if sequence_task_id not in scenario_task_ids:
-                non_existing_sequence_task_ids_in_scenario.add(sequence_task_id)
-        if len(non_existing_sequence_task_ids_in_scenario) > 0:
-            raise SequenceTaskDoesNotExistInScenario(
-                list(non_existing_sequence_task_ids_in_scenario), sequence_name, scenario_id
-            )
-
-    def _get_sequences(self) -> Dict[str, Sequence]:
-        _sequences = {}
-
-        from ..sequence._sequence_manager_factory import _SequenceManagerFactory
-
-        sequence_manager = _SequenceManagerFactory._build_manager()
-
-        for sequence_name, sequence_data in self._sequences.items():
-            sequence = sequence_manager._build_sequence(
-                sequence_name,
-                sequence_data.get(self._SEQUENCE_TASKS_KEY, []),
-                sequence_data.get(self._SEQUENCE_SUBSCRIBERS_KEY, []),
-                sequence_data.get(self._SEQUENCE_PROPERTIES_KEY, {}),
-                self.id,
-                self.version,
-            )
-            if not isinstance(sequence, Sequence):
-                raise NonExistingSequence(sequence_name, self.id)
-            _sequences[sequence_name] = sequence
-        return _sequences
-
     @property  # type: ignore
     @_self_reload(_MANAGER_NAME)
     def tasks(self) -> Dict[str, Task]:
+        """The dictionary of the scenario's tasks."""
         return self.__get_tasks()
 
-    def __get_tasks(self) -> Dict[str, Task]:
-        from ..task._task_manager_factory import _TaskManagerFactory
-
-        _tasks = {}
-        task_manager = _TaskManagerFactory._build_manager()
-
-        for task_or_id in self._tasks:
-            t = task_manager._get(task_or_id, task_or_id)
-
-            if not isinstance(t, Task):
-                raise NonExistingTask(task_or_id)
-            _tasks[t.config_id] = t
-        return _tasks
-
     @tasks.setter  # type: ignore
     @_self_setter(_MANAGER_NAME)
-    def tasks(self, val: Union[Set[TaskId], Set[Task]]):
+    def tasks(self, val: Union[Set[TaskId], Set[Task]]) -> None:
         self._tasks = set(val)
 
     @property  # type: ignore
     @_self_reload(_MANAGER_NAME)
     def additional_data_nodes(self) -> Dict[str, DataNode]:
-        return self.__get_additional_data_nodes()
+        """The dictionary of the scenario's additional data nodes.
 
-    def __get_additional_data_nodes(self):
-        from ..data._data_manager_factory import _DataManagerFactory
-
-        additional_data_nodes = {}
-        data_manager = _DataManagerFactory._build_manager()
-
-        for dn_or_id in self._additional_data_nodes:
-            dn = data_manager._get(dn_or_id, dn_or_id)
-
-            if not isinstance(dn, DataNode):
-                raise NonExistingDataNode(dn_or_id)
-            additional_data_nodes[dn.config_id] = dn
-        return additional_data_nodes
+        Additional data nodes are data nodes that are not part of the
+        scenario's graph but are used to store extra data. They are not
+        connected to the scenario's tasks."""
+        return self.__get_additional_data_nodes()
 
     @additional_data_nodes.setter  # type: ignore
     @_self_setter(_MANAGER_NAME)
-    def additional_data_nodes(self, val: Union[Set[TaskId], Set[DataNode]]):
+    def additional_data_nodes(self, val: Union[Set[TaskId], Set[DataNode]]) -> None:
         self._additional_data_nodes = set(val)
 
-    def _get_set_of_tasks(self) -> Set[Task]:
-        return set(self.tasks.values())
-
-    def __get_data_nodes(self) -> Dict[str, DataNode]:
-        data_nodes_dict = self.__get_additional_data_nodes()
-        for _, task in self.__get_tasks().items():
-            data_nodes_dict.update(task.data_nodes)
-        return data_nodes_dict
-
     @property  # type: ignore
     @_self_reload(_MANAGER_NAME)
     def data_nodes(self) -> Dict[str, DataNode]:
+        """The dictionary of the scenario's data nodes."""
         return self.__get_data_nodes()
 
     @property  # type: ignore
     @_self_reload(_MANAGER_NAME)
-    def creation_date(self):
+    def creation_date(self) -> datetime:
+        """The date and time of the scenario's creation."""
         return self._creation_date
 
     @creation_date.setter  # type: ignore
     @_self_setter(_MANAGER_NAME)
-    def creation_date(self, val):
+    def creation_date(self, val) -> None:
         self._creation_date = val
 
     @property  # type: ignore
     @_self_reload(_MANAGER_NAME)
-    def cycle(self):
+    def cycle(self) -> Optional[Cycle]:
+        """The cycle of the scenario"""
         return self._cycle
 
     @cycle.setter  # type: ignore
     @_self_setter(_MANAGER_NAME)
-    def cycle(self, val):
+    def cycle(self, val) -> None:
         self._cycle = val
 
     @property  # type: ignore
     @_self_reload(_MANAGER_NAME)
-    def is_primary(self):
+    def is_primary(self) -> bool:
+        """True if the scenario is the primary of its cycle. False otherwise."""
         return self._primary_scenario
 
     @is_primary.setter  # type: ignore
     @_self_setter(_MANAGER_NAME)
-    def is_primary(self, val):
+    def is_primary(self, val) -> None:
         self._primary_scenario = val
 
     @property  # type: ignore
     @_self_reload(_MANAGER_NAME)
-    def subscribers(self):
+    def subscribers(self) -> _ListAttributes:
+        """The list of callbacks to be called on `Job^`'s status change."""
         return self._subscribers
 
     @subscribers.setter  # type: ignore
     @_self_setter(_MANAGER_NAME)
-    def subscribers(self, val):
+    def subscribers(self, val) -> None:
         self._subscribers = _ListAttributes(self, val)
 
     @property  # type: ignore
     @_self_reload(_MANAGER_NAME)
-    def tags(self):
+    def tags(self) -> Set[str]:
+        """The set of scenario's tags."""
         return self._tags
 
     @tags.setter  # type: ignore
     @_self_setter(_MANAGER_NAME)
-    def tags(self, val):
+    def tags(self, val) -> None:
         self._tags = val or set()
 
     @property
-    def version(self):
+    def version(self) -> str:
+        """The application version of the scenario.
+
+        The string indicates the application version of the scenario to
+        instantiate. If not provided, the latest version is used."""
         return self._version
 
     @property
-    def owner_id(self):
-        return self._cycle.id
+    def owner_id(self) -> Optional[str]:
+        """The identifier of the scenario cycle."""
+        return self._cycle.id if self._cycle else None
 
     @property
-    def properties(self):
+    def properties(self) -> _Properties:
+        """The dictionary of additional properties."""
         self._properties = _Reloader()._reload(self._MANAGER_NAME, self)._properties
         return self._properties
 
     @property  # type: ignore
     def name(self) -> Optional[str]:
+        """The human-readable name of the scenario."""
         return self.properties.get("name")
 
     @name.setter  # type: ignore
-    def name(self, val):
+    def name(self, val) -> None:
         self.properties["name"] = val
 
     def has_tag(self, tag: str) -> bool:
@@ -555,50 +344,38 @@ class Scenario(_Entity, Submittable, _Labeled):
 
         Parameters:
             tag (str): The tag to search among the set of the scenario's tags.
+
         Returns:
             True if the scenario has the tag given as parameter. False otherwise.
         """
         return tag in self.tags
 
-    def _add_tag(self, tag: str):
-        self._tags = _Reloader()._reload("scenario", self)._tags
-        self._tags.add(tag)
-
-    def _remove_tag(self, tag: str):
-        self._tags = _Reloader()._reload("scenario", self)._tags
-        if self.has_tag(tag):
-            self._tags.remove(tag)
-
-    def subscribe(
-        self,
-        callback: Callable[[Scenario, Job], None],
-        params: Optional[List[Any]] = None,
-    ):
+    def subscribe(self, callback: Callable[[Scenario, Job], None], params: Optional[List[Any]] = None) -> None:
         """Subscribe a function to be called on `Job^` status change.
 
         The subscription is applied to all jobs created from the scenario's execution.
 
+        Note:
+            Notification will be available only for jobs created after this subscription.
+
         Parameters:
             callback (Callable[[Scenario^, Job^], None]): The callable function to be called
                 on status change.
             params (Optional[List[Any]]): The parameters to be passed to the _callback_.
-
-        Note:
-            Notification will be available only for jobs created after this subscription.
         """
         from ... import core as tp
 
         return tp.subscribe_scenario(callback, params, self)
 
-    def unsubscribe(self, callback: Callable[[Scenario, Job], None], params: Optional[List[Any]] = None):
+    def unsubscribe(self, callback: Callable[[Scenario, Job], None], params: Optional[List[Any]] = None) -> None:
         """Unsubscribe a function that is called when the status of a `Job^` changes.
 
+        Note:
+            The function will continue to be called for ongoing jobs.
+
         Parameters:
             callback (Callable[[Scenario^, Job^], None]): The callable function to unsubscribe.
             params (Optional[List[Any]]): The parameters to be passed to the _callback_.
-
-        Note:
-            The function will continue to be called for ongoing jobs.
         """
         from ... import core as tp
 
@@ -626,6 +403,7 @@ class Scenario(_Entity, Submittable, _Labeled):
                 before returning.<br/>
                 If not provided and *wait* is True, the function waits indefinitely.
             **properties (dict[str, any]): A keyworded variable length list of additional arguments.
+
         Returns:
             A `Submission^` containing the information of the submission.
         """
@@ -633,7 +411,7 @@ class Scenario(_Entity, Submittable, _Labeled):
 
         return _ScenarioManagerFactory._build_manager()._submit(self, callbacks, force, wait, timeout, **properties)
 
-    def set_primary(self):
+    def set_primary(self) -> None:
         """Promote the scenario as the primary scenario of its cycle.
 
         If the cycle already has a primary scenario, it will be demoted, and it will no longer
@@ -643,7 +421,7 @@ class Scenario(_Entity, Submittable, _Labeled):
 
         return tp.set_primary(self)
 
-    def add_tag(self, tag: str):
+    def add_tag(self, tag: str) -> None:
         """Add a tag to this scenario.
 
         If the scenario's cycle already has another scenario tagged with _tag_, the other
@@ -656,7 +434,7 @@ class Scenario(_Entity, Submittable, _Labeled):
 
         return tp.tag(self, tag)
 
-    def remove_tag(self, tag: str):
+    def remove_tag(self, tag: str) -> None:
         """Remove a tag from this scenario.
 
         Parameters:
@@ -692,7 +470,143 @@ class Scenario(_Entity, Submittable, _Labeled):
         """
         return self._get_simple_label()
 
+    def add_sequence(
+        self,
+        name: str,
+        tasks: Union[List[Task], List[TaskId]],
+        properties: Optional[Dict] = None,
+        subscribers: Optional[List[_Subscriber]] = None,
+    ) -> None:
+        """Add a sequence to the scenario.
+
+        Parameters:
+            name (str): The name of the sequence.
+            tasks (Union[List[Task], List[TaskId]]): The list of the scenario's tasks to add to the sequence.
+            properties (Optional[Dict]): The optional properties of the sequence.
+            subscribers (Optional[List[_Subscriber]]): The optional list of callbacks to be called on
+                `Job^`'s status change.
+
+        Raises:
+            SequenceTaskDoesNotExistInScenario^: If a task in the sequence does not exist in the scenario.
+            SequenceAlreadyExists^: If a sequence with the same name already exists in the scenario.
+        """
+        if name in self.sequences:
+            raise SequenceAlreadyExists(name, self.id)
+        seq = self._set_sequence(name, tasks, properties, subscribers)
+        Notifier.publish(_make_event(seq, EventOperation.CREATION))
+
+    def update_sequence(
+        self,
+        name: str,
+        tasks: Union[List[Task], List[TaskId]],
+        properties: Optional[Dict] = None,
+        subscribers: Optional[List[_Subscriber]] = None,
+    ) -> None:
+        """Update an existing sequence.
+
+        Parameters:
+            name (str): The name of the sequence to update.
+            tasks (Union[List[Task], List[TaskId]]): The new list of the scenario's tasks.
+            properties (Optional[Dict]): The new properties of the sequence.
+            subscribers (Optional[List[_Subscriber]]): The new list of callbacks to be called on `Job^`'s status change.
+
+        Raises:
+            SequenceTaskDoesNotExistInScenario^: If a task in the list does not exist in the scenario.
+            NonExistingSequence^: If no sequence with the given name exists in the scenario.
+        """
+        if name not in self.sequences:
+            raise NonExistingSequence(name, self.id)
+        seq = self._set_sequence(name, tasks, properties, subscribers)
+        Notifier.publish(_make_event(seq, EventOperation.UPDATE))
+
+    def add_sequences(self, sequences: Dict[str, Union[List[Task], List[TaskId]]]) -> None:
+        """Add multiple sequences to the scenario.
+
+        Note:
+            To provide properties and subscribers for the sequences, use `Scenario.add_sequence^` instead.
+
+        Parameters:
+            sequences (Dict[str, Union[List[Task], List[TaskId]]]):
+                A dictionary containing sequences to add. Each key is a sequence name, and the value must
+                be a list of the scenario's tasks.
+
+        Raises:
+            SequenceTaskDoesNotExistInScenario^: If a task in the sequence does not exist in the scenario.
+        """
+        _scenario = _Reloader()._reload(self._MANAGER_NAME, self)
+        _sc_task_ids = {task.id if isinstance(task, Task) else task for task in _scenario._tasks}
+        for name, tasks in sequences.items():
+            _seq_task_ids: Set[TaskId] = {task.id if isinstance(task, Task) else task for task in tasks}
+            self.__check_sequence_tasks_exist_in_scenario_tasks(name, _seq_task_ids, self.id, _sc_task_ids)
+        # Validate every sequence first, then add them, so an exception cannot leave only some of them added
+        for name, tasks in sequences.items():
+            self.add_sequence(name, tasks)
+
+    def remove_sequence(self, name: str) -> None:
+        """Remove a sequence from the scenario.
+
+        Parameters:
+            name (str): The name of the sequence to remove.
+        """
+        seq_id = self.sequences[name].id
+        _sequences = _Reloader()._reload(self._MANAGER_NAME, self)._sequences
+        _sequences.pop(name)
+        self.sequences = _sequences  # type: ignore
+        Notifier.publish(Event(EventEntityType.SEQUENCE, EventOperation.DELETION, entity_id=seq_id))
+
+    def remove_sequences(self, sequence_names: List[str]) -> None:
+        """Remove multiple sequences from the scenario.
+
+        Parameters:
+            sequence_names (List[str]): A list of sequence names to remove.
+        """
+        _sequences = _Reloader()._reload(self._MANAGER_NAME, self)._sequences
+        for sequence_name in sequence_names:
+            seq_id = self.sequences[sequence_name].id
+            _sequences.pop(sequence_name)
+            Notifier.publish(
+                Event(
+                    EventEntityType.SEQUENCE,
+                    EventOperation.DELETION,
+                    entity_id=seq_id,
+                )
+            )
+        self.sequences = _sequences  # type: ignore
+
+    def rename_sequence(self, old_name, new_name) -> None:
+        """Rename a scenario sequence.
+
+        Parameters:
+            old_name (str): The current name of the sequence to rename.
+            new_name (str): The new name of the sequence.
+
+        Raises:
+            SequenceAlreadyExists^: If a sequence with the same name already exists in the scenario.
+        """
+        if old_name == new_name:
+            return
+        if new_name in self.sequences:
+            raise SequenceAlreadyExists(new_name, self.id)
+        self._sequences[new_name] = self._sequences[old_name]
+        del self._sequences[old_name]
+        self.sequences = self._sequences  # type: ignore
+        Notifier.publish(
+            Event(
+                EventEntityType.SCENARIO,
+                EventOperation.UPDATE,
+                entity_id=self.id,
+                attribute_name="sequences",
+                attribute_value=self._sequences,
+            )
+        )
+
+    @staticmethod
+    def _new_id(config_id: str) -> ScenarioId:
+        """Generate a unique scenario identifier."""
+        return ScenarioId(Scenario.__SEPARATOR.join([Scenario._ID_PREFIX, _validate_id(config_id), str(uuid.uuid4())]))
+
     def _is_consistent(self) -> bool:
+        """Check if the scenario is consistent."""
         dag = self._build_dag()
         if dag.number_of_nodes() == 0:
             return True
@@ -706,6 +620,116 @@ class Scenario(_Entity, Submittable, _Labeled):
             return False
         return True
 
+    def _add_tag(self, tag: str) -> None:
+        self._tags = _Reloader()._reload("scenario", self)._tags
+        self._tags.add(tag)
+
+    def _remove_tag(self, tag: str) -> None:
+        self._tags = _Reloader()._reload("scenario", self)._tags
+        if self.has_tag(tag):
+            self._tags.remove(tag)
+
+    def _get_set_of_tasks(self) -> Set[Task]:
+        return set(self.tasks.values())
+
+    def __get_data_nodes(self) -> Dict[str, DataNode]:
+        data_nodes_dict = self.__get_additional_data_nodes()
+        for _, task in self.__get_tasks().items():
+            data_nodes_dict.update(task.data_nodes)
+        return data_nodes_dict
+
+    def __get_additional_data_nodes(self):
+        from ..data._data_manager_factory import _DataManagerFactory
+
+        additional_data_nodes = {}
+        data_manager = _DataManagerFactory._build_manager()
+
+        for dn_or_id in self._additional_data_nodes:
+            dn = data_manager._get(dn_or_id, dn_or_id)
+
+            if not isinstance(dn, DataNode):
+                raise NonExistingDataNode(dn_or_id)
+            additional_data_nodes[dn.config_id] = dn
+        return additional_data_nodes
+
+    def __get_tasks(self) -> Dict[str, Task]:
+        from ..task._task_manager_factory import _TaskManagerFactory
+
+        _tasks = {}
+        task_manager = _TaskManagerFactory._build_manager()
+
+        for task_or_id in self._tasks:
+            t = task_manager._get(task_or_id, task_or_id)
+
+            if not isinstance(t, Task):
+                raise NonExistingTask(task_or_id)
+            _tasks[t.config_id] = t
+        return _tasks
+
+    @staticmethod
+    def __check_sequence_tasks_exist_in_scenario_tasks(
+        sequence_name: str, sequence_task_ids: Set[TaskId], scenario_id: ScenarioId, scenario_task_ids: Set[TaskId]
+    ):
+        non_existing_sequence_task_ids_in_scenario = set()
+        for sequence_task_id in sequence_task_ids:
+            if sequence_task_id not in scenario_task_ids:
+                non_existing_sequence_task_ids_in_scenario.add(sequence_task_id)
+        if len(non_existing_sequence_task_ids_in_scenario) > 0:
+            raise SequenceTaskDoesNotExistInScenario(
+                list(non_existing_sequence_task_ids_in_scenario), sequence_name, scenario_id
+            )
+
+    def _get_sequences(self) -> Dict[str, Sequence]:
+        _sequences = {}
+
+        from ..sequence._sequence_manager_factory import _SequenceManagerFactory
+
+        sequence_manager = _SequenceManagerFactory._build_manager()
+
+        for sequence_name, sequence_data in self._sequences.items():
+            sequence = sequence_manager._build_sequence(
+                sequence_name,
+                sequence_data.get(self._SEQUENCE_TASKS_KEY, []),
+                sequence_data.get(self._SEQUENCE_SUBSCRIBERS_KEY, []),
+                sequence_data.get(self._SEQUENCE_PROPERTIES_KEY, {}),
+                self.id,
+                self.version,
+            )
+            if not isinstance(sequence, Sequence):
+                raise NonExistingSequence(sequence_name, self.id)
+            _sequences[sequence_name] = sequence
+        return _sequences
+
+    def _set_sequence(
+        self,
+        name: str,
+        tasks: Union[List[Task], List[TaskId]],
+        properties: Optional[Dict] = None,
+        subscribers: Optional[List[_Subscriber]] = None,
+    ) -> Sequence:
+        _scenario = _Reloader()._reload(self._MANAGER_NAME, self)
+        _scenario_task_ids = {task.id if isinstance(task, Task) else task for task in _scenario._tasks}
+        _sequence_task_ids: Set[TaskId] = {task.id if isinstance(task, Task) else task for task in tasks}
+        self.__check_sequence_tasks_exist_in_scenario_tasks(name, _sequence_task_ids, self.id, _scenario_task_ids)
+
+        from taipy.core.sequence._sequence_manager_factory import _SequenceManagerFactory
+
+        seq_manager = _SequenceManagerFactory._build_manager()
+        seq = seq_manager._create(name, tasks, subscribers or [], properties or {}, self.id, self.version)
+
+        _sequences = _Reloader()._reload(self._MANAGER_NAME, self)._sequences
+        _sequences.update(
+            {
+                name: {
+                    self._SEQUENCE_TASKS_KEY: tasks,
+                    self._SEQUENCE_PROPERTIES_KEY: properties or {},
+                    self._SEQUENCE_SUBSCRIBERS_KEY: subscribers or [],
+                }
+            }
+        )
+        self.sequences = _sequences  # type: ignore
+        return seq
+
 
 @_make_event.register(Scenario)
 def _make_event_for_scenario(
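
For illustration, here is a minimal sketch of the sequence-management API documented above. The `double` function and all configuration ids are hypothetical, used only to exercise `add_sequence`, `rename_sequence`, and `remove_sequence`:

```python
import taipy as tp
from taipy import Config


def double(nb: int) -> int:
    return nb * 2


# Hypothetical one-task configuration, for illustration only.
input_cfg = Config.configure_data_node("my_input", default_data=21)
output_cfg = Config.configure_data_node("my_output")
task_cfg = Config.configure_task("double", double, input_cfg, output_cfg)
scenario_cfg = Config.configure_scenario("my_scenario", task_configs=[task_cfg])

scenario = tp.create_scenario(scenario_cfg)

# Group the scenario's task into a named sequence, rename it, then remove it.
scenario.add_sequence("processing", [scenario.tasks["double"]])
scenario.rename_sequence("processing", "doubling")
scenario.remove_sequence("doubling")
```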

+ 1 - 0
taipy/core/scenario/scenario_id.py

@@ -12,4 +12,5 @@
 from typing import NewType
 
 ScenarioId = NewType("ScenarioId", str)
+"""Type that holds a `Scenario^` identifier."""
 ScenarioId.__doc__ = """Type that holds a `Scenario^` identifier."""

+ 104 - 74
taipy/core/sequence/sequence.py

@@ -46,14 +46,14 @@ class Sequence(_Entity, Submittable, _Labeled):
     a sequence dedicated to preprocessing and preparing data, a sequence for computing a
     training model, and a sequence dedicated to scoring.
 
-    !!! Example
+    ??? Example
 
         Let's assume we have a scenario configuration modelling a manufacturer that is
         training an ML model, predicting sales forecasts, and finally, based on
         the forecasts, planning its production. Three task are configured and linked
         together through data nodes.
 
-        ![sequences](../img/sequences.svg){ align=left }
+        ![sequences](../../../../img/sequences.svg){ align=left }
 
         First, the sales sequence (boxed in green in the picture) contains **training**
         and **predict** tasks. Second, a production sequence (boxed in dark gray in the
@@ -111,15 +111,6 @@ class Sequence(_Entity, Submittable, _Labeled):
         ```
 
     Note that the sequences are not necessarily disjoint and may share some tasks.
-
-    Attributes:
-        properties (dict[str, Any]): A dictionary of additional properties.
-        tasks (List[Task^]): The list of `Task`s.
-        sequence_id (str): The Unique identifier of the sequence.
-        owner_id (str):  The identifier of the owner (scenario_id, cycle_id) or None.
-        parent_ids (Optional[Set[str]]): The set of identifiers of the parent scenarios.
-        version (str): The string indicates the application version of the sequence to instantiate. If not provided,
-            the latest version is used.
     """
 
     _ID_PREFIX = "SEQUENCE"
@@ -127,6 +118,9 @@ class Sequence(_Entity, Submittable, _Labeled):
     _MANAGER_NAME = "sequence"
     __CHECK_INIT_DONE_ATTR_NAME = "_init_done"
 
+    id: SequenceId
+    """The unique identifier of the sequence."""
+
     def __init__(
         self,
         properties: Dict[str, Any],
@@ -146,15 +140,12 @@ class Sequence(_Entity, Submittable, _Labeled):
         self._version = version or _VersionManagerFactory._build_manager()._get_latest_version()
         self._init_done = True
 
-    @staticmethod
-    def _new_id(sequence_name: str, scenario_id) -> SequenceId:
-        seq_id = sequence_name.replace(" ", "TPSPACE")
-        return SequenceId(Sequence._SEPARATOR.join([Sequence._ID_PREFIX, _validate_id(seq_id), scenario_id]))
-
-    def __hash__(self):
+    def __hash__(self) -> int:
+        """Return the hash of the sequence."""
         return hash(self.id)
 
-    def __eq__(self, other):
+    def __eq__(self, other) -> bool:
+        """Check if a sequence is equal to another sequence."""
         return isinstance(other, Sequence) and self.id == other.id
 
     def __setattr__(self, name: str, value: Any) -> None:
@@ -167,7 +158,21 @@ class Sequence(_Entity, Submittable, _Labeled):
             except AttributeError:
                 return super().__setattr__(name, value)
 
-    def __getattr__(self, attribute_name):
+    def __getattr__(self, attribute_name: str):
+        """Get the attribute of the sequence.
+
+        The attribute can be a task or a data node.
+
+        Parameters:
+            attribute_name (str): The attribute name.
+
+        Returns:
+            The attribute value.
+
+        Raises:
+            AttributeError: If the attribute is not found.
+
+        """
         tasks = self._get_tasks()
         if protected_attribute_name in tasks:
@@ -182,15 +187,17 @@ class Sequence(_Entity, Submittable, _Labeled):
     @property  # type: ignore
     @_self_reload(_MANAGER_NAME)
     def tasks(self) -> Dict[str, Task]:
+        """The dictionary of tasks used by the sequence."""
         return self._get_tasks()
 
     @tasks.setter  # type: ignore
     @_self_setter(_MANAGER_NAME)
-    def tasks(self, tasks: Union[List[TaskId], List[Task]]):
+    def tasks(self, tasks: Union[List[TaskId], List[Task]]) -> None:
         self._tasks = tasks
 
     @property
     def data_nodes(self) -> Dict[str, DataNode]:
+        """The dictionary of data nodes used by the sequence."""
         data_nodes = {}
         list_data_nodes = [task.data_nodes for task in self._get_tasks().values()]
         for data_node in list_data_nodes:
@@ -199,74 +206,49 @@ class Sequence(_Entity, Submittable, _Labeled):
         return data_nodes
 
     @property
-    def parent_ids(self):
+    def parent_ids(self) -> Set[str]:
+        """The set of identifiers of the parent scenarios."""
         return self._parent_ids
 
     @property
-    def owner_id(self):
+    def owner_id(self) -> Optional[str]:
+        """The identifier of the owner (scenario_id, cycle_id) or None."""
         return self._owner_id
 
     @property
-    def version(self):
+    def version(self) -> str:
+        """The application version of the sequence.
+
+        The string indicates the application version of the sequence to
+        instantiate. If not provided, the latest version is used."""
         return self._version
 
     @property
-    def properties(self):
+    def properties(self) -> _Properties:
+        """The dictionary of additional properties."""
         self._properties = _Reloader()._reload("sequence", self)._properties
         return self._properties
 
-    def _is_consistent(self) -> bool:
-        dag = self._build_dag()
-        if dag.number_of_nodes() == 0:
-            return True
-        if not nx.is_directed_acyclic_graph(dag):
-            return False
-        if not nx.is_weakly_connected(dag):
-            return False
-        for left_node, right_node in dag.edges:
-            if (isinstance(left_node, DataNode) and isinstance(right_node, Task)) or (
-                isinstance(left_node, Task) and isinstance(right_node, DataNode)
-            ):
-                continue
-            return False
-        return True
-
-    def _get_tasks(self) -> Dict[str, Task]:
-        from ..task._task_manager_factory import _TaskManagerFactory
-
-        tasks = {}
-        task_manager = _TaskManagerFactory._build_manager()
-        for task_or_id in self._tasks:
-            t = task_manager._get(task_or_id, task_or_id)
-            if not isinstance(t, Task):
-                raise NonExistingTask(task_or_id)
-            tasks[t.config_id] = t
-        return tasks
-
-    def _get_set_of_tasks(self) -> Set[Task]:
-        from ..task._task_manager_factory import _TaskManagerFactory
-
-        tasks = set()
-        task_manager = _TaskManagerFactory._build_manager()
-        for task_or_id in self._tasks:
-            task = task_manager._get(task_or_id, task_or_id)
-            if not isinstance(task, Task):
-                raise NonExistingTask(task_or_id)
-            tasks.add(task)
-        return tasks
-
     @property  # type: ignore
     @_self_reload(_MANAGER_NAME)
-    def subscribers(self):
+    def subscribers(self) -> _ListAttributes:
+        """The list of callbacks to be called on `Job^`'s status change."""
         return self._subscribers
 
     @subscribers.setter  # type: ignore
     @_self_setter(_MANAGER_NAME)
-    def subscribers(self, val):
+    def subscribers(self, val) -> None:
         self._subscribers = _ListAttributes(self, val)
 
-    def get_parents(self):
-        """Get parents of the sequence entity"""
+    def get_parents(self) -> Dict[str, Set[_Entity]]:
+        """Get parent scenarios of the sequence.
+
+        Returns:
+            The dictionary of all parent entities.
+                They are grouped by their type (Scenario^, Sequences^, or tasks^) so each key corresponds
+                to a level of the parents and the value is a set of the parent entities.
+                An empty dictionary is returned if the entity does not have parents.
+        """
         from ... import core as tp
 
         return tp.get_parents(self)
@@ -275,29 +257,31 @@ class Sequence(_Entity, Submittable, _Labeled):
         self,
         callback: Callable[[Sequence, Job], None],
         params: Optional[List[Any]] = None,
-    ):
+    ) -> None:
         """Subscribe a function to be called on `Job^` status change.
         The subscription is applied to all jobs created from the sequence's execution.
 
+        Note:
+            Notification will be available only for jobs created after this subscription.
+
         Parameters:
             callback (Callable[[Sequence^, Job^], None]): The callable function to be called on
                 status change.
             params (Optional[List[Any]]): The parameters to be passed to the _callback_.
-        Note:
-            Notification will be available only for jobs created after this subscription.
         """
         from ... import core as tp
 
         return tp.subscribe_sequence(callback, params, self)
 
-    def unsubscribe(self, callback: Callable[[Sequence, Job], None], params: Optional[List[Any]] = None):
+    def unsubscribe(self, callback: Callable[[Sequence, Job], None], params: Optional[List[Any]] = None) -> None:
         """Unsubscribe a function that is called when the status of a `Job^` changes.
 
+        Note:
+            The function will continue to be called for ongoing jobs.
+
         Parameters:
             callback (Callable[[Sequence^, Job^], None]): The callable function to unsubscribe.
             params (Optional[List[Any]]): The parameters to be passed to the _callback_.
-        Note:
-            The function will continue to be called for ongoing jobs.
         """
         from ... import core as tp
 
@@ -325,6 +309,7 @@ class Sequence(_Entity, Submittable, _Labeled):
                 returning.<br/>
                 If not provided and *wait* is True, the function waits indefinitely.
             **properties (dict[str, any]): A keyworded variable length list of additional arguments.
+
         Returns:
             A `Submission^` containing the information of the submission.
         """
@@ -348,6 +333,51 @@ class Sequence(_Entity, Submittable, _Labeled):
         """
         return self._get_simple_label()
 
+    @staticmethod
+    def _new_id(sequence_name: str, scenario_id) -> SequenceId:
+        seq_id = sequence_name.replace(" ", "TPSPACE")
+        return SequenceId(Sequence._SEPARATOR.join([Sequence._ID_PREFIX, _validate_id(seq_id), scenario_id]))
+
+    def _is_consistent(self) -> bool:
+        dag = self._build_dag()
+        if dag.number_of_nodes() == 0:
+            return True
+        if not nx.is_directed_acyclic_graph(dag):
+            return False
+        if not nx.is_weakly_connected(dag):
+            return False
+        for left_node, right_node in dag.edges:
+            if (isinstance(left_node, DataNode) and isinstance(right_node, Task)) or (
+                isinstance(left_node, Task) and isinstance(right_node, DataNode)
+            ):
+                continue
+            return False
+        return True
+
+    def _get_tasks(self) -> Dict[str, Task]:
+        from ..task._task_manager_factory import _TaskManagerFactory
+
+        tasks = {}
+        task_manager = _TaskManagerFactory._build_manager()
+        for task_or_id in self._tasks:
+            t = task_manager._get(task_or_id, task_or_id)
+            if not isinstance(t, Task):
+                raise NonExistingTask(task_or_id)
+            tasks[t.config_id] = t
+        return tasks
+
+    def _get_set_of_tasks(self) -> Set[Task]:
+        from ..task._task_manager_factory import _TaskManagerFactory
+
+        tasks = set()
+        task_manager = _TaskManagerFactory._build_manager()
+        for task_or_id in self._tasks:
+            task = task_manager._get(task_or_id, task_or_id)
+            if not isinstance(task, Task):
+                raise NonExistingTask(task_or_id)
+            tasks.add(task)
+        return tasks
+
 
 @_make_event.register(Sequence)
 def _make_event_for_sequence(
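
A minimal sketch of the subscription API documented above, assuming `scenario` was built as in the earlier sketch, owns a sequence named "doubling", and that an orchestrator service is running so jobs are actually created:

```python
def on_job_update(sequence, job):
    # Called on every status change of the jobs spawned by this sequence.
    print(f"{job.id} is now {job.status}")


sequence = scenario.sequences["doubling"]
sequence.subscribe(on_job_update)
submission = sequence.submit()
sequence.unsubscribe(on_job_update)
```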

+ 1 - 0
taipy/core/sequence/sequence_id.py

@@ -12,4 +12,5 @@
 from typing import NewType
 
 SequenceId = NewType("SequenceId", str)
+"""Type that holds a `Sequence^` identifier."""
 SequenceId.__doc__ = """Type that holds a `Sequence^` identifier."""

+ 93 - 60
taipy/core/submission/submission.py

@@ -36,17 +36,7 @@ class Submission(_Entity, _Labeled):
     The submission holds the jobs created by the execution of the submittable and the
     `SubmissionStatus^`. The status is lively updated by Taipy during the execution of the jobs.
 
-    Attributes:
-        entity_id (str): The identifier of the entity that was submitted.
-        id (str): The identifier of the `Submission^` entity.
-        jobs (Optional[Union[List[Job], List[JobId]]]): A list of jobs.
-        properties (dict[str, Any]): A dictionary of additional properties.
-        creation_date (Optional[datetime]): The date of this submission's creation.
-        submission_status (Optional[SubmissionStatus]): The current status of this submission.
-        version (Optional[str]): The string indicates the application version of the submission to instantiate.
-            If not provided, the latest version is used.
-
-    !!! example
+    ??? example
 
         ```python
         import taipy as tp
@@ -81,6 +71,9 @@ class Submission(_Entity, _Labeled):
     __SEPARATOR = "_"
     lock = threading.Lock()
 
+    id: SubmissionId
+    """The identifier of the `Submission` entity."""
+
     def __init__(
         self,
         entity_id: str,
@@ -113,35 +106,63 @@ class Submission(_Entity, _Labeled):
         self._blocked_jobs: Set = set()
         self._pending_jobs: Set = set()
 
-    @staticmethod
-    def __new_id() -> SubmissionId:
-        """Generate a unique Submission identifier."""
-        return SubmissionId(Submission.__SEPARATOR.join([Submission._ID_PREFIX, str(uuid.uuid4())]))
+    def __lt__(self, other) -> bool:
+        """Compare the creation date of two submissions."""
+        return self.creation_date.timestamp() < other.creation_date.timestamp()
+
+    def __le__(self, other) -> bool:
+        """Compare the creation date of two submissions."""
+        return self.creation_date.timestamp() <= other.creation_date.timestamp()
+
+    def __gt__(self, other) -> bool:
+        """Compare the creation date of two submissions."""
+        return self.creation_date.timestamp() > other.creation_date.timestamp()
+
+    def __ge__(self, other) -> bool:
+        """Compare the creation date of two submissions."""
+        return self.creation_date.timestamp() >= other.creation_date.timestamp()
+
+    def __hash__(self) -> int:
+        return hash(self.id)
+
+    def __eq__(self, other) -> bool:
+        """Check if a submission is equal to another submission."""
+        return isinstance(other, Submission) and self.id == other.id
 
     @property
     def entity_id(self) -> str:
+        """The identifier of the entity that was submitted."""
         return self._entity_id
 
     @property
     def entity_type(self) -> str:
+        """The type of the entity that was submitted."""
         return self._entity_type
 
     @property
     def entity_config_id(self) -> Optional[str]:
+        """The config id of the entity that was submitted."""
         return self._entity_config_id
 
     @property
-    def properties(self):
+    def properties(self) -> _Properties:
+        """A dictionary of additional properties."""
         self._properties = _Reloader()._reload(self._MANAGER_NAME, self)._properties
         return self._properties
 
     @property
-    def creation_date(self):
+    def creation_date(self) -> datetime:
+        """The date and time when the submission was created."""
         return self._creation_date
 
     @property
     @_self_reload(_MANAGER_NAME)
     def submitted_at(self) -> Optional[datetime]:
+        """The date and time when the submission was submitted.
+
+        The submitted date and time corresponds to the date and time of the first job
+        that was submitted. If no job was submitted, the submitted date and time is None.
+        """
         jobs_submitted_at = [job.submitted_at for job in self.jobs if job.submitted_at]
         if jobs_submitted_at:
             return min(jobs_submitted_at)
@@ -150,6 +171,11 @@ class Submission(_Entity, _Labeled):
     @property
     @_self_reload(_MANAGER_NAME)
     def run_at(self) -> Optional[datetime]:
+        """The date and time when the submission was run.
+
+        The run date and time corresponds to the date and time of the first job
+        that was run. If no job was run, the run date and time is None.
+        """
         jobs_run_at = [job.run_at for job in self.jobs if job.run_at]
         if jobs_run_at:
             return min(jobs_run_at)
@@ -158,6 +184,12 @@ class Submission(_Entity, _Labeled):
     @property
     @_self_reload(_MANAGER_NAME)
     def finished_at(self) -> Optional[datetime]:
+        """The date and time when the submission was finished.
+
+        The finished date and time corresponds to the date and time of the last job
+        that was completed. If at least one of the jobs is not finished, the finished
+        date and time is None.
+        """
         if all(job.finished_at for job in self.jobs):
             return max([job.finished_at for job in self.jobs if job.finished_at])
         return None
@@ -165,14 +197,12 @@ class Submission(_Entity, _Labeled):
     @property
     @_self_reload(_MANAGER_NAME)
     def execution_duration(self) -> Optional[float]:
-        """Get the duration of the submission execution in seconds.
-        The execution time is the duration from the first job running to the last job completion.
+        """The duration of the submission execution in seconds.
 
-        Returns:
-            Optional[float]: The duration of the job execution in seconds.
-                - If no job was run, None is returned.
-                - If one of the jobs is not finished, the execution time is the duration
-                  from the running time of the first job to the current time.
+        The execution duration in seconds is the duration from the first job running
+        to the last job completion. If no job was run, the execution duration is None.
+        If at least one job is not finished, the execution duration is the duration
+        from the first job running time to the current time.
         """
         if self.finished_at and self.run_at:
             return (self.finished_at - self.run_at).total_seconds()
@@ -180,25 +210,10 @@ class Submission(_Entity, _Labeled):
             return (datetime.now() - self.run_at).total_seconds()
         return None
 
-    def get_label(self) -> str:
-        """Returns the submission simple label prefixed by its owner label.
-
-        Returns:
-            The label of the submission as a string.
-        """
-        return self._get_label()
-
-    def get_simple_label(self) -> str:
-        """Returns the submission simple label.
-
-        Returns:
-            The simple label of the submission as a string.
-        """
-        return self._get_simple_label()
-
     @property  # type: ignore
     @_self_reload(_MANAGER_NAME)
     def jobs(self) -> List[Job]:
+        """The list of jobs created by the submission."""
         from ..job._job_manager_factory import _JobManagerFactory
 
         job_manager = _JobManagerFactory._build_manager()
@@ -206,70 +221,83 @@ class Submission(_Entity, _Labeled):
 
     @jobs.setter  # type: ignore
     @_self_setter(_MANAGER_NAME)
-    def jobs(self, jobs: Union[List[Job], List[JobId]]):
+    def jobs(self, jobs: Union[List[Job], List[JobId]]) -> None:
         self._jobs = jobs
 
-    def __hash__(self):
-        return hash(self.id)
-
-    def __eq__(self, other):
-        return isinstance(other, Submission) and self.id == other.id
-
     @property  # type: ignore
     @_self_reload(_MANAGER_NAME)
     def submission_status(self) -> SubmissionStatus:
+        """The status of the submission."""
         return self._submission_status
 
     @submission_status.setter  # type: ignore
     @_self_setter(_MANAGER_NAME)
-    def submission_status(self, submission_status):
+    def submission_status(self, submission_status) -> None:
         self._submission_status = submission_status
 
     @property  # type: ignore
     @_self_reload(_MANAGER_NAME)
     def is_abandoned(self) -> bool:
+        """Indicate if the submission is abandoned."""
         return self._is_abandoned
 
     @is_abandoned.setter  # type: ignore
     @_self_setter(_MANAGER_NAME)
-    def is_abandoned(self, val):
+    def is_abandoned(self, val) -> None:
         self._is_abandoned = val
 
     @property  # type: ignore
     @_self_reload(_MANAGER_NAME)
     def is_completed(self) -> bool:
+        """Indicate if the submission is completed."""
         return self._is_completed
 
     @is_completed.setter  # type: ignore
     @_self_setter(_MANAGER_NAME)
-    def is_completed(self, val):
+    def is_completed(self, val) -> None:
         self._is_completed = val
 
     @property  # type: ignore
     @_self_reload(_MANAGER_NAME)
     def is_canceled(self) -> bool:
+        """Indicate if the submission is canceled."""
         return self._is_canceled
 
     @is_canceled.setter  # type: ignore
     @_self_setter(_MANAGER_NAME)
-    def is_canceled(self, val):
+    def is_canceled(self, val) -> None:
         self._is_canceled = val
 
-    def __lt__(self, other):
-        return self.creation_date.timestamp() < other.creation_date.timestamp()
+    @property
+    def version(self) -> str:
+        """The application version of the submission.
 
-    def __le__(self, other):
-        return self.creation_date.timestamp() <= other.creation_date.timestamp()
+        The string indicates the application version of the submission to
+        instantiate. If not provided, the latest version is used."""
+        return self._version
 
-    def __gt__(self, other):
-        return self.creation_date.timestamp() > other.creation_date.timestamp()
+    def get_label(self) -> str:
+        """Returns the submission simple label prefixed by its owner label.
 
-    def __ge__(self, other):
-        return self.creation_date.timestamp() >= other.creation_date.timestamp()
+        Returns:
+            The label of the submission as a string.
+        """
+        return self._get_label()
+
+    def get_simple_label(self) -> str:
+        """Returns the submission simple label.
+
+        Returns:
+            The simple label of the submission as a string.
+        """
+        return self._get_simple_label()
 
     def is_finished(self) -> bool:
         """Indicate if the submission is finished.
 
+        A submission is considered finished if its submission status is
+        `COMPLETED`, `FAILED`, or `CANCELED`.
+
         Returns:
             True if the submission is finished.
         """
@@ -284,12 +312,17 @@ class Submission(_Entity, _Labeled):
 
         Returns:
             A ReasonCollection object that can function as a Boolean value,
-            which is True if the submission can be deleted. False otherwise.
+                which is True if the submission can be deleted. False otherwise.
         """
         from ... import core as tp
 
         return tp.is_deletable(self)
 
+    @staticmethod
+    def __new_id() -> SubmissionId:
+        """Generate a unique Submission identifier."""
+        return SubmissionId(Submission.__SEPARATOR.join([Submission._ID_PREFIX, str(uuid.uuid4())]))
+
 
 @_make_event.register(Submission)
 def _make_event_for_submission(
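
A short sketch of the `Submission` ordering and timing properties documented above (assuming `scenario` is an existing scenario and the orchestrator is running):

```python
first = scenario.submit()
second = scenario.submit()

# Submissions order by creation date, per the comparison operators above.
earliest = min(first, second)
assert earliest is first

if second.is_finished():
    print(f"ran for {second.execution_duration} seconds")
```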

+ 1 - 0
taipy/core/submission/submission_id.py

@@ -12,4 +12,5 @@
 from typing import NewType
 
 SubmissionId = NewType("SubmissionId", str)
+"""Type that holds a `Submission^` identifier."""
 SubmissionId.__doc__ = """Type that holds a `Submission^` identifier."""

+ 9 - 9
taipy/core/taipy.py

@@ -298,12 +298,12 @@ def exists(
     """Check if an entity with the specified identifier exists.
 
     This function checks if an entity with the given identifier exists.
-    It supports various types of entity identifiers, including `TaskId^`,
-    `DataNodeId^`, `SequenceId^`, `ScenarioId^`, `JobId^`, `CycleId^`, `SubmissionId^`, and string
+    It supports various types of entity identifiers, including `TaskId`,
+    `DataNodeId`, `SequenceId`, `ScenarioId`, `JobId`, `CycleId`, `SubmissionId`, and string
     representations.
 
     Parameters:
-        entity_id (Union[DataNodeId^, TaskId^, SequenceId^, ScenarioId^, JobId^, CycleId^, SubmissionId^, str]): The
+        entity_id (Union[DataNodeId, TaskId, SequenceId, ScenarioId, JobId, CycleId, SubmissionId, str]): The
             identifier of the entity to check for existence.
 
     Returns:
@@ -429,8 +429,8 @@ def is_deletable(entity: Union[Scenario, Job, Submission, ScenarioId, JobId, Sub
             job or submission to check.
 
     Returns:
-        A ReasonCollection object that can function as a Boolean value,
-        which is True if the given scenario, job or submission can be deleted. False otherwise.
+        A ReasonCollection object that can function as a Boolean value, which is True
+            if the given scenario, job or submission can be deleted. False otherwise.
     """
     if isinstance(entity, Job):
         return _JobManagerFactory._build_manager()._is_deletable(entity)
@@ -461,16 +461,16 @@ def delete(entity_id: Union[TaskId, DataNodeId, SequenceId, ScenarioId, JobId, C
     - If a `SequenceId` is provided, the related jobs are deleted.
     - If a `TaskId` is provided, the related data nodes, and jobs are deleted.
     - If a `DataNodeId` is provided, the data node is deleted.
-    - If a `SubmissionId^` is provided, the related jobs are deleted.
+    - If a `SubmissionId` is provided, the related jobs are deleted.
       The submission can only be deleted if the execution has been finished.
-    - If a `JobId^` is provided, the job entity can only be deleted if the execution has been finished.
+    - If a `JobId` is provided, the job entity can only be deleted if the execution has been finished.
 
     Parameters:
         entity_id (Union[TaskId, DataNodeId, SequenceId, ScenarioId, SubmissionId, JobId, CycleId]):
             The identifier of the entity to delete.
 
     Raises:
-        ModelNotFound: No entity corresponds to the specified *entity_id*.
+        ModelNotFound^: No entity corresponds to the specified *entity_id*.
     """
     if _is_job(entity_id):
         job_manager = _JobManagerFactory._build_manager()
@@ -740,7 +740,7 @@ def subscribe_sequence(
 
 def unsubscribe_sequence(
     callback: Callable[[Sequence, Job], None], params: Optional[List[Any]] = None, sequence: Optional[Sequence] = None
-):
+) -> None:
     """Unsubscribe a function that is called when the status of a Job changes.
 
     Parameters:
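
To make the `exists`/`delete` contract documented in this file's changes concrete, a minimal sketch (assuming `scenario` is an existing scenario entity):

```python
import taipy as tp

if tp.exists(scenario.id):
    tp.delete(scenario.id)  # also deletes nested sequences, tasks, jobs...
assert not tp.exists(scenario.id)
```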

+ 47 - 23
taipy/core/task/task.py

@@ -42,7 +42,7 @@ class Task(_Entity, _Labeled):
     A task's attributes (the input data nodes, the output data nodes, the Python
     function) are populated based on its task configuration `TaskConfig^`.
 
-    !!! Example
+    ??? Example
 
         ```python
         import taipy as tp
@@ -99,6 +99,9 @@ class Task(_Entity, _Labeled):
     _MANAGER_NAME = "task"
     __CHECK_INIT_DONE_ATTR_NAME = "_init_done"
 
+    id: TaskId
+    """The unique identifier of the task."""
+
     def __init__(
         self,
         config_id: str,
@@ -124,10 +127,11 @@ class Task(_Entity, _Labeled):
         self._properties = _Properties(self, **properties)
         self._init_done = True
 
-    def __hash__(self):
+    def __hash__(self) -> int:
         return hash(self.id)
 
-    def __eq__(self, other):
+    def __eq__(self, other) -> bool:
+        """Check if a task is equal to another task."""
         return isinstance(other, Task) and self.id == other.id
 
     def __getstate__(self):
@@ -146,7 +150,7 @@ class Task(_Entity, _Labeled):
             except AttributeError:
                 return super().__setattr__(name, value)
 
-    def __getattr__(self, attribute_name):
+    def __getattr__(self, attribute_name) -> Any:
         protected_attribute_name = _validate_id(attribute_name)
         if protected_attribute_name in self.input:
             return self.input[protected_attribute_name]
@@ -155,74 +159,80 @@ class Task(_Entity, _Labeled):
         raise AttributeError(f"{attribute_name} is not an attribute of task {self.id}")
 
     @property
-    def properties(self):
+    def properties(self) -> _Properties:
+        """Dictionary of additional properties."""
         self._properties = _Reloader()._reload(self._MANAGER_NAME, self)._properties
         return self._properties
 
     @property
-    def config_id(self):
+    def config_id(self) -> str:
+        """The identifier of the `TaskConfig^`."""
         return self._config_id
 
     @property
-    def owner_id(self):
+    def owner_id(self) -> Optional[str]:
+        """The identifier of the owner (scenario_id or cycle_id) or None."""
         return self._owner_id
 
-    def get_parents(self):
-        """Get parents of the task."""
-        from ... import core as tp
-
-        return tp.get_parents(self)
-
     @property  # type: ignore
     @_self_reload(_MANAGER_NAME)
-    def parent_ids(self):
+    def parent_ids(self) -> Set[str]:
+        """The set of identifiers of the parent scenarios."""
         return self._parent_ids
 
     @property
     def input(self) -> Dict[str, DataNode]:
+        """The dictionary of input data nodes."""
         return self._input
 
     @property
     def output(self) -> Dict[str, DataNode]:
+        """The dictionary of output data nodes."""
         return self._output
 
     @property
     def data_nodes(self) -> Dict[str, DataNode]:
+        """The dictionary of input and output data nodes."""
         return {**self.input, **self.output}
 
     @property  # type: ignore
     @_self_reload(_MANAGER_NAME)
-    def function(self):
+    def function(self) -> Callable:
+        """The python function to execute."""
         return self._function
 
     @function.setter  # type: ignore
     @_self_setter(_MANAGER_NAME)
-    def function(self, val):
+    def function(self, val) -> None:
         self._function = val
 
     @property  # type: ignore
     @_self_reload(_MANAGER_NAME)
-    def skippable(self):
+    def skippable(self) -> bool:
+        """True if the task can be skipped if no change has been made on inputs. False otherwise"""
         return self._skippable
 
     @skippable.setter  # type: ignore
     @_self_setter(_MANAGER_NAME)
-    def skippable(self, val):
+    def skippable(self, val) -> None:
         self._skippable = val
 
     @property
     def scope(self) -> Scope:
-        """Retrieve the lowest scope of the task based on its data nodes.
+        """The lowest scope of the task's data nodes.
 
-        Returns:
-            The lowest scope present in input and output data nodes or GLOBAL if there are
-                either no input or no output.
+        The lowest scope present in input and output data nodes or GLOBAL if there are
+        either no input or no output.
         """
         data_nodes = list(self._input.values()) + list(self._output.values())
         return Scope(min(dn.scope for dn in data_nodes)) if len(data_nodes) != 0 else Scope.GLOBAL
 
     @property
-    def version(self):
+    def version(self) -> str:
+        """The application version of the task.
+
+        The string indicates the application version of the task to
+        instantiate. If not provided, the latest version is used."""
         return self._version
 
     def submit(
@@ -245,6 +255,7 @@ class Task(_Entity, _Labeled):
                 returning.<br/>
                 If not provided and *wait* is True, the function waits indefinitely.
             **properties (dict[str, any]): A keyworded variable length list of additional arguments.
+
         Returns:
             A `Submission^` containing the information of the submission.
         """
@@ -252,6 +263,19 @@ class Task(_Entity, _Labeled):
 
         return _TaskManagerFactory._build_manager()._submit(self, callbacks, force, wait, timeout, **properties)
 
+    def get_parents(self) -> Dict[str, Set[_Entity]]:
+        """Get the parent scenarios of the task.
+
+        Returns:
+            The dictionary of all parent entities.
+                They are grouped by their type (Scenario^, Sequence^, or Task^) so each key corresponds
+                to a level of the parents and the value is a set of the parent entities.
+                An empty dictionary is returned if the entity does not have parents.
+        """
+        from ... import core as tp
+
+        return tp.get_parents(self)
+
     def get_label(self) -> str:
         """Returns the task simple label prefixed by its owner label.
 

+ 1 - 0
taipy/core/task/task_id.py

@@ -12,4 +12,5 @@
 from typing import NewType
 
 TaskId = NewType("TaskId", str)
+"""Type that holds a `Task^` identifier."""
 TaskId.__doc__ = """Type that holds a `Task^` identifier."""

+ 2 - 2
taipy/gui/__init__.py

@@ -48,8 +48,8 @@ application.
     add functionality to Taipy GUI:
 
     - [`python-magic`](https://pypi.org/project/python-magic/): identifies image format
-      from byte buffers so the [`image`](../../../refmans/gui/viselements/generic/image.md) control can
-      display them, and so that [`file_download`](../../../refmans/gui/viselements/generic/file_download.md)
+      from byte buffers so the [`image`](../../../../refmans/gui/viselements/generic/image.md) control can
+      display them, and so that [`file_download`](../../../../refmans/gui/viselements/generic/file_download.md)
       can request the browser to display the image content when relevant.<br/>
       You can install that package with the regular `pip install python-magic` command
      (then potentially `pip install python-magic-bin` on Windows),

+ 4 - 4
taipy/gui/_gui_section.py

@@ -48,18 +48,18 @@ class _GuiSection(UniqueSection):
 
     @staticmethod
     def _configure(**properties) -> "_GuiSection":
-        """NOT DOCUMENTED
-        Configure the Graphical User Interface.
+        """Configure the Graphical User Interface.
 
         Parameters:
             **properties (dict[str, any]): Keyword arguments that configure the behavior of the `Gui^` instances.<br/>
                 Please refer to the gui config section
-                [page](../../userman/advanced_features/configuration/gui-config.md#configuring-the-gui-instance)
+                [page](../../../../../../userman/advanced_features/configuration/gui-config.md#configuring-the-gui-instance)
                 of the User Manual for more information on the accepted arguments.
+
         Returns:
             The GUI configuration.
 
-        """
+        """  # noqa: E501
         section = _GuiSection(property_list=list(default_config), **properties)
         TaipyConfig._register(section)
         return TaipyConfig.unique_sections[_GuiSection.name]

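`_GuiSection._configure()` backs the public `Config.configure_gui()` entry point. A sketch of typical use; the property names below are examples taken from the GUI configuration page, not an exhaustive list:

```python
from taipy import Config

# Register GUI-wide settings once; Gui(...).run() picks them up later.
Config.configure_gui(dark_mode=True, port=8080, debug=False)
```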
+ 4 - 4
taipy/gui/_renderers/__init__.py

@@ -139,7 +139,7 @@ class Markdown(_Renderer):
     user interfaces.
 
     You can find details on the Taipy Markdown-specific syntax and how to add
-    Taipy Visual Elements in the [section on Markdown](../../userman/gui/pages/markdown.md)
+    Taipy Visual Elements in the [section on Markdown](../../../../../userman/gui/pages/markdown.md)
     of the User Manual.
     """
 
@@ -153,7 +153,7 @@ class Markdown(_Renderer):
                 template content.
 
         The `Markdown` constructor supports the *style* parameter as explained in the
-        [section on Styling](../../userman/gui/styling/index.md#style-sheets) and in the
+        [section on Styling](../../../../../userman/gui/styling/index.md#style-sheets) and in the
         `(taipy.gui.Page.)set_style()^` method.
         """
         kwargs["content"] = content
@@ -171,7 +171,7 @@ class Html(_Renderer):
     user interfaces.
 
     You can find details on HTML-specific constructs and how to add
-    Taipy Visual Elements in the [section on HTML](../../userman/gui/pages/html.md)
+    Taipy Visual Elements in the [section on HTML](../../../../../userman/gui/pages/html.md)
     of the User Manual.
     """
 
@@ -185,7 +185,7 @@ class Html(_Renderer):
                 template content.
 
         The `Html` constructor supports the *style* parameter as explained in the
-        [section on Styling](../../userman/gui/styling/index.md#style-sheets) and in the
+        [section on Styling](../../../../../userman/gui/styling/index.md#style-sheets) and in the
         `(taipy.gui.Page.)set_style()^` method.
         """
         kwargs["content"] = content

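Both renderers are used the same way; a minimal sketch with hypothetical page content in each syntax:

```python
from taipy.gui import Gui, Html, Markdown

md_page = Markdown("# Hello\n<|Click me|button|>")  # Taipy Markdown syntax
html_page = Html("<h1>Hello</h1><taipy:button>Click me</taipy:button>")

if __name__ == "__main__":
    Gui(page=md_page).run()  # pass html_page instead for the HTML flavor
```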
+ 3 - 2
taipy/gui/builder/_element.py

@@ -144,7 +144,8 @@ class _Element(ABC):
         return None
 
     def __embed_object(self, obj: t.Any, is_expression=True) -> str:
-        """Embed an object in the caller frame
+        """NOT DOCUMENTED
+        Embed an object in the caller frame
 
         Return the Taipy expression of the embedded object
         """
@@ -288,7 +289,7 @@ class content(_Control):
     by the content of the page the user navigates to.
 
     The usage of this pseudo-element is described in
-    [this page](../../userman/gui/pages/index.md#application-header-and-footer).
+    [this page](../../../../../../userman/gui/pages/index.md#application-header-and-footer).
     """
 
     def _render(self, gui: "Gui") -> str:

+ 2 - 2
taipy/gui/builder/page.py

@@ -20,7 +20,7 @@ class Page(_Renderer):
     """Page generator for the Builder API.
 
     This class is used to create a page with the Builder API.<br/>
-    Instance of this class can be added to the application using `Gui.add_page()^`.
+    Instances of this class can be added to the application using `Gui.add_page()^`.
 
     This class is typically used as a Python Context Manager to add the elements.<br/>
     Here is how you can create a single-page application, creating the elements with code:
@@ -48,7 +48,7 @@ class Page(_Renderer):
                 The default creates a `part` where several elements can be stored.
 
         The `Page` constructor supports the *style* parameter as explained in the
-        [section on Styling](../../userman/gui/styling/index.md#style-sheets) and in the
+        [section on Styling](../../../../../../userman/gui/styling/index.md#style-sheets) and in the
         `(taipy.gui.Page.)set_style()^` method.
         """
         if element is None:

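A sketch of the context-manager usage the docstring describes; `message` is a hypothetical bound variable:

```python
import taipy.gui.builder as tgb
from taipy.gui import Gui

message = "Hello, Builder API"

# Elements instantiated inside the block are collected into the page's root part.
with tgb.Page() as page:
    tgb.text("{message}")
    tgb.button("Click me")

if __name__ == "__main__":
    Gui(page=page).run()
```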
+ 8 - 8
taipy/gui/gui.py

@@ -299,7 +299,7 @@ class Gui:
                 of the main Python file is allowed.
             env_filename (Optional[str]): An optional file from which to load application
                 configuration variables (see the
-                [Configuration](../../userman/advanced_features/configuration/gui-config.md#configuring-the-gui-instance)
+                [Configuration](../../../../../userman/advanced_features/configuration/gui-config.md#configuring-the-gui-instance)
                 section of the User Manual for details.)<br/>
                 The default value is "taipy.gui.env"
             libraries (Optional[List[ElementLibrary]]): An optional list of extension library
@@ -425,7 +425,8 @@ class Gui:
     def register_content_provider(content_type: type, content_provider: t.Callable[..., str]) -> None:
         """Add a custom content provider.
 
-        The application can use custom content for the `part` block when its *content* property is set to an object with type *type*.
+        The application can use custom content for the `part` block when its *content* property
+        is set to an object with type *type*.
 
         Arguments:
             content_type: The type of the content that triggers the content provider.
@@ -1553,8 +1554,7 @@ class Gui:
     ) -> t.Any:
         """Invoke a user callback for a given state.
 
-        See the
-        [section on Long Running Callbacks in a Thread](../../userman/gui/callbacks.md#long-running-callbacks-in-a-thread)
+        See the [section on Long Running Callbacks in a Thread](../../../../../userman/gui/callbacks.md#long-running-callbacks-in-a-thread)
         in the User Manual for details on when and how this function can be used.
 
         Arguments:
@@ -2094,8 +2094,8 @@ class Gui:
     ) -> Partial:
         """Create a new `Partial^`.
 
-        The [User Manual section on Partials](../../userman/gui/pages/partial/index.md) gives details on
-        when and how to use this class.
+        The [User Manual section on Partials](../../../../../userman/gui/pages/partial/index.md)
+        gives details on when and how to use this class.
 
         Arguments:
             page (Union[str, Page^]): The page to create a new Partial from.<br/>
@@ -2701,7 +2701,7 @@ class Gui:
         URL that `Gui` serves. The default is to listen to the *localhost* address
         (127.0.0.1) on the port number 5000. However, the configuration of this `Gui`
         object may impact that (see the
-        [Configuration](../../userman/advanced_features/configuration/gui-config.md#configuring-the-gui-instance)
+        [Configuration](../../../../../userman/advanced_features/configuration/gui-config.md#configuring-the-gui-instance)
         section of the User Manual for details).
 
         Arguments:
@@ -2727,7 +2727,7 @@ class Gui:
                 Also note that setting the *debug* argument to True forces *async_mode* to "threading".
             **kwargs (dict[str, any]): Additional keyword arguments that configure how this `Gui` is run.
                 Please refer to the gui config section
-                [page](../../userman/advanced_features/configuration/gui-config.md#configuring-the-gui-instance)
+                [page](../../../../../userman/advanced_features/configuration/gui-config.md#configuring-the-gui-instance)
                 of the User Manual for more information.
 
         Returns:

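A sketch tying `Gui.invoke_callback()` to the state identifiers it targets, using the module-level `invoke_callback()`/`get_state_id()` helpers; the `status` variable and the worker hand-off are hypothetical:

```python
from taipy.gui import Gui, State, get_state_id, invoke_callback

state_ids = set()
status = "idle"

def on_init(state: State):
    # Capture the identifier so code running outside a callback can target this state.
    state_ids.add(get_state_id(state))

def set_status(state: State, message: str):
    state.status = message

# From a worker thread, with `gui` the running Gui instance and `sid` a saved id:
#     invoke_callback(gui, sid, set_status, ["done"])
```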
+ 5 - 5
taipy/gui/gui_actions.py

@@ -34,7 +34,7 @@ def download(
         - a string: the value must be an existing path name to the file that gets downloaded or
           the URL to the resource you want to download.
         - a buffer (such as a `bytes` object): if the size of the buffer is smaller than the
-          [*data_url_max_size*](../../userman/advanced_features/configuration/gui-config.md#p-data_url_max_size)
+          [*data_url_max_size*](../../../../../userman/advanced_features/configuration/gui-config.md#p-data_url_max_size)
           configuration setting, then the [`python-magic`](https://pypi.org/project/python-magic/)
           package is used to determine the [MIME type](https://en.wikipedia.org/wiki/Media_type)
           of the buffer content, and the download is performed using a generated "data:" URL with
@@ -163,7 +163,7 @@ def navigate(
     Arguments:
         state (State^): The current user state as received in any callback.
         to: The name of the page to navigate to. This can be a page identifier (as created by
-            `Gui.add_page()^` with no leading '/') or an URL.<br/>
+            `Gui.add_page()^` with no leading '/') or a URL.<br/>
             If omitted, the application navigates to the root page.
         params: A dictionary of query parameters.
         tab: When navigating to a page that is not a known page, the page is opened in a tab identified by
@@ -208,7 +208,7 @@ def get_state_id(state: State) -> t.Optional[str]:
     The state identifier is a string generated by Taipy GUI for a given `State^` that is used
     to serialize callbacks.
     See the
-    [User Manual section on Long Running Callbacks](../../userman/gui/callbacks.md#long-running-callbacks)
+    [User Manual section on Long Running Callbacks](../../../../../userman/gui/callbacks.md#long-running-callbacks)
     for details on when and how this function can be used.
 
     Arguments:
@@ -330,7 +330,7 @@ def invoke_long_callback(
     user_status_function_args: t.Optional[t.Union[t.Tuple, t.List]] = None,
     period=0,
 ):
-    """Invoke a long running user callback.
+    """Invoke a long-running user callback.
 
     Long-running callbacks are run in a separate thread so as not to block the application itself.
 
@@ -339,7 +339,7 @@ def invoke_long_callback(
     *user_function* is finished (successfully or not), or periodically (using the *period* parameter).
 
     See the
-    [User Manual section on Long Running Callbacks](../../userman/gui/callbacks.md#long-running-callbacks)
+    [User Manual section on Long Running Callbacks](../../../../../userman/gui/callbacks.md#long-running-callbacks)
     for details on when and how this function can be used.
 
     Arguments:

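A sketch of the notification pattern described above: the heavy function runs in its own thread, and a status function fires on completion (the names are hypothetical; on success the status function also receives the returned value):

```python
import time
from taipy.gui import State, invoke_long_callback, notify

def heavy_function(nb: int) -> int:
    time.sleep(5)  # stands in for a long computation
    return nb * 2

def heavy_status(state: State, status: bool, result=None):
    # Invoked with the state once heavy_function ends, successfully or not.
    if status:
        notify(state, "success", f"Finished: {result}")
    else:
        notify(state, "error", "The computation failed.")

def on_action(state: State):
    invoke_long_callback(state, heavy_function, [21], heavy_status)
```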
+ 3 - 2
taipy/gui/icon.py

@@ -15,8 +15,9 @@ import typing as t
 class Icon:
     """Small image in the User Interface.
 
-    Icons are typically used in controls like [button](../../refmans/gui/viselements/generic/button.md)
-    or items in a [menu](../../refmans/gui/viselements/generic/menu.md).
+    Icons are typically used in controls like
+    [button](../../../../../refmans/gui/viselements/generic/button.md)
+    or items in a [menu](../../../../../refmans/gui/viselements/generic/menu.md).
 
     Attributes:
         path (str): The path to the image file.

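A sketch of the typical use mentioned in the docstring, pairing `Icon` instances with `menu` entries; the image paths are hypothetical:

```python
from taipy.gui import Gui, Icon

menu_lov = [
    ("home", Icon("images/home.png", "Home")),
    ("about", Icon("images/about.png", "About")),
]

page = "<|menu|lov={menu_lov}|>"

if __name__ == "__main__":
    Gui(page).run()
```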
+ 3 - 3
taipy/gui/partial.py

@@ -28,9 +28,9 @@ class Partial(_Page):
     and not related pages. This means you do not have to repeat yourself when
     creating your page templates.
 
-    Visual elements such as [`part`](../../refmans/gui/viselements/generic/part.md),
-    [`dialog`](../../refmans/gui/viselements/generic/dialog.md) or
-    [`pane`](../../refmans/gui/viselements/generic/pane.md) can use Partials.
+    Visual elements such as [`part`](../../../../../refmans/gui/viselements/generic/part.md),
+    [`dialog`](../../../../../refmans/gui/viselements/generic/dialog.md) or
+    [`pane`](../../../../../refmans/gui/viselements/generic/pane.md) can use Partials.
 
     Note that `Partial` has no constructor (no `__init__()` method): to create a
     `Partial`, you must call the `Gui.add_partial()^` function.

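Since `Partial` has no constructor, a sketch of the `Gui.add_partial()` route, shown feeding a `pane` block; the content and the `show`/`detail` variables are hypothetical:

```python
from taipy.gui import Gui, Markdown

show = False
gui = Gui(page=Markdown("<|{show}|pane|partial={detail}|>"))

# The only way to obtain a Partial: let the Gui create and register it.
detail = gui.add_partial(Markdown("Reusable pane content"))

if __name__ == "__main__":
    gui.run()
```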
+ 2 - 2
taipy/gui/state.py

@@ -29,14 +29,14 @@ class State:
     """Accessor to the bound variables from callbacks.
 
     `State` is used when you need to access the value of variables
-    bound to visual elements (see [Binding](../../userman/gui/binding.md)).<br/>
+    bound to visual elements (see [Binding](../../../../../userman/gui/binding.md)).<br/>
     Because each browser connected to the application server may represent and
     modify any variable at any moment as the result of user interaction, each
     connection holds its own set of variables along with their values. We call
     the set of these application variables the application _state_, as seen
     by a given client.
 
-    Each callback (see [Callbacks](../../userman/gui/callbacks.md)) receives a specific
+    Each callback (see [Callbacks](../../../../../userman/gui/callbacks.md)) receives a specific
     instance of the `State` class, where you can find all the variables bound to
     visual elements in your application.
 

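A sketch of the per-client behavior described above: the callback mutates the `State` it receives, so only the clicking browser's value changes (`count` and the page content are hypothetical):

```python
from taipy.gui import Gui, State

count = 0  # bound per connection: each client sees its own value

page = """
<|{count}|text|>
<|Increment|button|on_action=increment|>
"""

def increment(state: State):
    state.count = state.count + 1  # updates this client's state only

if __name__ == "__main__":
    Gui(page).run()
```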
+ 2 - 2
taipy/rest/__init__.py

@@ -13,13 +13,13 @@
 
 The Taipy Rest package provides the runnable `Rest^` service, exposing REST APIs on top of Taipy Core
 functionalities, in particular scenario and data management (more details
-on the [user manual](../../../userman/scenario_features/sdm/index.md)).
+in the [user manual](../../../../userman/scenario_features/sdm/index.md)).
 
 Once the `Rest^` service runs, users can call REST APIs to create, read, update, submit and remove Taipy entities
 (including cycles, scenarios, sequences, tasks, jobs, and data nodes). This comes in handy when
 integrating a Taipy application into a more complex IT ecosystem.
 
-Please refer to [REST API](../../reference_rest/index.md) page to get the exhaustive list of available APIs."""
+Please refer to the [REST API](../../../reference_rest/index.md) page for the exhaustive list of available APIs."""
 
 from ._init import *
 from .version import _get_version

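A minimal sketch of running the service; the endpoint in the comment is indicative of the REST API reference, not exhaustive:

```python
import taipy as tp

if __name__ == "__main__":
    rest = tp.Rest()
    tp.run(rest)  # REST endpoints (e.g. GET /api/v1/scenarios/) become available
```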
+ 0 - 4
taipy/rest/api/resources/datanode.py

@@ -97,7 +97,6 @@ class DataNodeResource(Resource):
                     "last_edit_date": "2022-08-10T16:03:40.855082",
                     "job_ids": [],
                     "version": "latest",
-                    "cacheable": false,
                     "validity_days": null,
                     "validity_seconds": null,
                     "edit_in_progress": false,
@@ -138,7 +137,6 @@ class DataNodeResource(Resource):
                     "last_edit_date": "2022-08-10T16:03:40.855082",
                     "job_ids": [],
                     "version": "latest",
-                    "cacheable": false,
                     "validity_days": null,
                     "validity_seconds": null,
                     "edit_in_progress": false,
@@ -306,7 +304,6 @@ class DataNodeList(Resource):
                         "last_edit_date": "2022-08-10T16:03:40.855082",
                         "job_ids": [],
                         "version": "latest",
-                        "cacheable": false,
                         "validity_days": null,
                         "validity_seconds": null,
                         "edit_in_progress": false,
@@ -345,7 +342,6 @@ class DataNodeList(Resource):
                         "last_edit_date": "2022-08-10T16:03:40.855082",
                         "job_ids": [],
                         "version": "latest",
-                        "cacheable": false,
                         "validity_days": null,
                         "validity_seconds": null,
                         "edit_in_progress": false,

+ 0 - 2
taipy/rest/api/schemas/datanode.py

@@ -23,7 +23,6 @@ class DataNodeSchema(Schema):
     last_edit_date = fields.String()
     job_ids = fields.List(fields.String)
     version = fields.String()
-    cacheable = fields.Boolean()
     validity_days = fields.Float()
     validity_seconds = fields.Float()
     edit_in_progress = fields.Boolean()
@@ -34,7 +33,6 @@ class DataNodeConfigSchema(Schema):
     name = fields.String()
     storage_type = fields.String()
     scope = fields.Integer()
-    cacheable = fields.Boolean()
 
     @pre_dump
     def serialize_scope(self, obj, **kwargs):

+ 1 - 1
tests/common/config/utils/checker_for_tests.py

@@ -9,8 +9,8 @@
 # an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
 # specific language governing permissions and limitations under the License.
 
-from taipy.common.config import IssueCollector
 from taipy.common.config.checker._checkers._config_checker import _ConfigChecker
+from taipy.common.config.checker.issue_collector import IssueCollector
 
 
 class CheckerForTest(_ConfigChecker):

+ 0 - 56
tests/core/config/test_data_node_config.py

@@ -236,7 +236,6 @@ def test_data_node_getitem():
     assert Config.data_nodes[data_node_id].storage_type == data_node_config.storage_type
     assert Config.data_nodes[data_node_id].scope == data_node_config.scope
     assert Config.data_nodes[data_node_id].properties == data_node_config.properties
-    assert Config.data_nodes[data_node_id].cacheable == data_node_config.cacheable
 
 
 def test_data_node_creation_no_duplication():
@@ -328,9 +327,6 @@ def test_block_datanode_config_update_in_development_mode():
     with pytest.raises(ConfigurationUpdateBlocked):
         data_node_config.scope = Scope.SCENARIO
 
-    with pytest.raises(ConfigurationUpdateBlocked):
-        data_node_config.cacheable = True
-
     with pytest.raises(ConfigurationUpdateBlocked):
         data_node_config.properties = {"foo": "bar"}
 
@@ -365,9 +361,6 @@ def test_block_datanode_config_update_in_standalone_mode():
     with pytest.raises(ConfigurationUpdateBlocked):
         data_node_config.scope = Scope.SCENARIO
 
-    with pytest.raises(ConfigurationUpdateBlocked):
-        data_node_config.cacheable = True
-
     with pytest.raises(ConfigurationUpdateBlocked):
         data_node_config.properties = {"foo": "bar"}
 
@@ -412,52 +405,3 @@ def test_clean_config():
     assert dn1_config.validity_period is dn2_config.validity_period is None
     assert dn1_config.default_path is dn2_config.default_path is None
     assert dn1_config.properties == dn2_config.properties == {}
-
-
-def test_deprecated_cacheable_attribute_remains_compatible():
-    dn_1_id = "dn_1_id"
-    dn_1_config = Config.configure_data_node(
-        id=dn_1_id,
-        storage_type="pickle",
-        cacheable=False,
-        scope=Scope.SCENARIO,
-    )
-    assert Config.data_nodes[dn_1_id].id == dn_1_id
-    assert Config.data_nodes[dn_1_id].storage_type == "pickle"
-    assert Config.data_nodes[dn_1_id].scope == Scope.SCENARIO
-    assert Config.data_nodes[dn_1_id].properties == {"cacheable": False}
-    assert not Config.data_nodes[dn_1_id].cacheable
-    dn_1_config.cacheable = True
-    assert Config.data_nodes[dn_1_id].properties == {"cacheable": True}
-    assert Config.data_nodes[dn_1_id].cacheable
-
-    dn_2_id = "dn_2_id"
-    dn_2_config = Config.configure_data_node(
-        id=dn_2_id,
-        storage_type="pickle",
-        cacheable=True,
-        scope=Scope.SCENARIO,
-    )
-    assert Config.data_nodes[dn_2_id].id == dn_2_id
-    assert Config.data_nodes[dn_2_id].storage_type == "pickle"
-    assert Config.data_nodes[dn_2_id].scope == Scope.SCENARIO
-    assert Config.data_nodes[dn_2_id].properties == {"cacheable": True}
-    assert Config.data_nodes[dn_2_id].cacheable
-    dn_2_config.cacheable = False
-    assert Config.data_nodes[dn_1_id].properties == {"cacheable": False}
-    assert not Config.data_nodes[dn_1_id].cacheable
-
-    dn_3_id = "dn_3_id"
-    dn_3_config = Config.configure_data_node(
-        id=dn_3_id,
-        storage_type="pickle",
-        scope=Scope.SCENARIO,
-    )
-    assert Config.data_nodes[dn_3_id].id == dn_3_id
-    assert Config.data_nodes[dn_3_id].storage_type == "pickle"
-    assert Config.data_nodes[dn_3_id].scope == Scope.SCENARIO
-    assert Config.data_nodes[dn_3_id].properties == {}
-    assert not Config.data_nodes[dn_3_id].cacheable
-    dn_3_config.cacheable = True
-    assert Config.data_nodes[dn_3_id].properties == {"cacheable": True}
-    assert Config.data_nodes[dn_3_id].cacheable

+ 0 - 50
tests/core/config/test_task_config.py

@@ -13,7 +13,6 @@ import os
 from unittest import mock
 
 from taipy.common.config import Config
-from taipy.common.config.common.scope import Scope
 from taipy.core.config import DataNodeConfig
 from tests.core.utils.named_temporary_file import NamedTemporaryFile
 
@@ -181,52 +180,3 @@ def test_clean_config():
     assert task1_config.output_configs == task2_config.output_configs == []
     assert task1_config.skippable is task2_config.skippable is False
     assert task1_config.properties == task2_config.properties == {}
-
-
-def test_deprecated_cacheable_attribute_remains_compatible():
-    dn_1_id = "dn_1_id"
-    dn_1_config = Config.configure_data_node(
-        id=dn_1_id,
-        storage_type="pickle",
-        cacheable=False,
-        scope=Scope.SCENARIO,
-    )
-    assert Config.data_nodes[dn_1_id].id == dn_1_id
-    assert Config.data_nodes[dn_1_id].storage_type == "pickle"
-    assert Config.data_nodes[dn_1_id].scope == Scope.SCENARIO
-    assert Config.data_nodes[dn_1_id].properties == {"cacheable": False}
-    assert not Config.data_nodes[dn_1_id].cacheable
-    dn_1_config.cacheable = True
-    assert Config.data_nodes[dn_1_id].properties == {"cacheable": True}
-    assert Config.data_nodes[dn_1_id].cacheable
-
-    dn_2_id = "dn_2_id"
-    dn_2_config = Config.configure_data_node(
-        id=dn_2_id,
-        storage_type="pickle",
-        cacheable=True,
-        scope=Scope.SCENARIO,
-    )
-    assert Config.data_nodes[dn_2_id].id == dn_2_id
-    assert Config.data_nodes[dn_2_id].storage_type == "pickle"
-    assert Config.data_nodes[dn_2_id].scope == Scope.SCENARIO
-    assert Config.data_nodes[dn_2_id].properties == {"cacheable": True}
-    assert Config.data_nodes[dn_2_id].cacheable
-    dn_2_config.cacheable = False
-    assert Config.data_nodes[dn_1_id].properties == {"cacheable": False}
-    assert not Config.data_nodes[dn_1_id].cacheable
-
-    dn_3_id = "dn_3_id"
-    dn_3_config = Config.configure_data_node(
-        id=dn_3_id,
-        storage_type="pickle",
-        scope=Scope.SCENARIO,
-    )
-    assert Config.data_nodes[dn_3_id].id == dn_3_id
-    assert Config.data_nodes[dn_3_id].storage_type == "pickle"
-    assert Config.data_nodes[dn_3_id].scope == Scope.SCENARIO
-    assert Config.data_nodes[dn_3_id].properties == {}
-    assert not Config.data_nodes[dn_3_id].cacheable
-    dn_3_config.cacheable = True
-    assert Config.data_nodes[dn_3_id].properties == {"cacheable": True}
-    assert Config.data_nodes[dn_3_id].cacheable

+ 0 - 12
tests/core/data/test_data_node.py

@@ -652,18 +652,6 @@ class TestDataNode:
             data_node.get_parents()
             mck.assert_called_once_with(data_node)
 
-    def test_cacheable_deprecated_false(self):
-        dn = FakeDataNode("foo")
-        with pytest.warns(DeprecationWarning):
-            _ = dn.cacheable
-        assert dn.cacheable is False
-
-    def test_cacheable_deprecated_true(self):
-        dn = FakeDataNode("foo", properties={"cacheable": True})
-        with pytest.warns(DeprecationWarning):
-            _ = dn.cacheable
-        assert dn.cacheable is True
-
     def test_data_node_with_env_variable_value_not_stored(self):
         dn_config = Config.configure_data_node("A", prop="ENV[FOO]")
         with mock.patch.dict(os.environ, {"FOO": "bar"}):

+ 1 - 1
tests/core/data/test_write_parquet_data_node.py

@@ -175,7 +175,7 @@ class TestWriteParquetDataNode:
                 "write_kwargs": {"compression": comp2},
             },
         )
-        dn.write_with_kwargs(df, compression=comp1)
+        dn._write_with_kwargs(df, compression=comp1)
         df.to_parquet(path=temp_file_2_path, compression=comp1, engine=engine)
         with open(temp_file_2_path, "rb") as tf:
             with pathlib.Path(temp_file_path).open("rb") as f:

+ 0 - 1
tests/rest/json/expected/datanode.json

@@ -11,7 +11,6 @@
     "validity_days": null,
     "validity_seconds": null,
     "edit_in_progress": false,
-    "cacheable": false,
     "data_node_properties": {
     },
     "version": "1.0"