Added similar input options across tools

- Removed sub-process usage for calling sub-scripts
- Updated README
- Added module tests for tools
This commit is contained in:
Vivek Kumar Dutta 2021-07-21 09:55:29 +00:00
parent 3743c16d4b
commit f8535772a2
25 changed files with 187283 additions and 2724 deletions


@@ -7,6 +7,7 @@ stages:
  - unit_test
  - functional_test
  - functional_api_test
  - tools_test
  - uspd

variables:
@@ -54,6 +55,21 @@ run_functional_api_test:
      - timestamp.log
      - functional-api-test-coverage.xml

run_tools_test:
  stage: tools_test
  image: iopsys/code-analysis:latest
  allow_failure: false
  script:
    - "./gitlab-ci/tools-test.sh"
  artifacts:
    when: always
    paths:
      - timestamp.log
      - tools/out/datamodel_default.xml
      - tools/out/datamodel_hdm.xml
      - tools/out/datamodel.xls

run_uspd:
  stage: uspd
  variables:

README.md

@@ -50,19 +50,19 @@ As mentioned above, all Data Models are stored in the **'dmtree'** folder. In or
`bbfdm` library offers a tool to generate templates of the source code from JSON files placed under **'dmtree/json'**. So, any developer can fill these JSON files ([tr181](/dmtree/json/tr181.json) or [tr104](/dmtree/json/tr104.json)) with mapping fields according to UCI, UBUS or CLI commands, then generate the source code in C.
```bash
$ ./convert_dm_json_to_c.py
Usage: convert_dm_json_to_c.py <data model name> [Object path]
data model name: The data model(s) to be used, for ex: tr181 or tr181,tr104
Examples:
  - convert_dm_json_to_c.py tr181
    ==> Generate the C code of tr181 data model in datamodel/ folder
  - convert_dm_json_to_c.py tr104
    ==> Generate the C code of tr104 data model in datamodel/ folder
  - convert_dm_json_to_c.py tr181,tr104
    ==> Generate the C code of tr181 and tr104 data models in datamodel/ folder
  - convert_dm_json_to_c.py tr181 Device.DeviceInfo.
    ==> Generate the C code of Device.DeviceInfo object in datamodel/ folder
  - convert_dm_json_to_c.py tr104 Device.Services.VoiceService.{i}.Capabilities.
    ==> Generate the C code of Device.Services.VoiceService.{i}.Capabilities. object in datamodel/ folder
```
@@ -402,7 +402,7 @@ The application should bring its JSON file under **'/etc/bbfdm/json/'** path wit
}
```
**2. Object with instance:**
- **UCI command:** uci show wireless | grep wifi-device
@@ -623,87 +623,186 @@ The application should bring its JSON file under **'/etc/bbfdm/json/'** path wit
- For more examples on JSON files, you can see these links: [X_IOPSYS_EU_MCPD](https://dev.iopsys.eu/feed/broadcom/-/blob/devel/mcpd/files/etc/bbfdm/json/X_IOPSYS_EU_MCPD.json), [UserInterface](/test/files/etc/bbfdm/json/UserInterface.json), [X_IOPSYS_EU_Dropbear](/test/files/etc/bbfdm/json/X_IOPSYS_EU_Dropbear.json)
## BBFDM Tools
The BBF tools are written in python3 and have the following dependencies.

System utilities: python3-pip, libxml2-utils
```bash
$ sudo apt install -y python3-pip
$ sudo apt install -y libxml2-utils
```
Python utilities: jsonschema, xlwt
```bash
$ pip3 install jsonschema xlwt
```
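Once installed, the Python dependencies can be sanity-checked without importing them; a minimal sketch (not part of the BBF tools themselves):

```python
import importlib.util

# Report whether the Python modules required by the BBF tools are available.
for mod in ("jsonschema", "xlwt"):
    status = "installed" if importlib.util.find_spec(mod) else "missing"
    print("%s: %s" % (mod, status))
```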
| Tools                     | Description                                                            |
| ------------------------- |:----------------------------------------------------------------------:|
| convert_dm_json_to_c.py   | Convert JSON mapping to C code for the dynamic plugins library.         |
| convert_dm_xml_to_json.py | Convert standard XML to JSON format.                                    |
| generate_dm.py            | Generate list of supported/unsupported parameters based on a JSON input.|
| generate_dm_xml.py        | Generate list of supported/unsupported parameters in XML format.        |
| generate_dm_excel.py      | Generate list of supported/unsupported parameters in XLS format.        |

> Note: Currently all the tools need to be executed in the tools directory.
### XML->JSON converter
It is a [python script](./tools/convert_dm_xml_to_json.py) to convert a Data Model from Broadband Forum XML format to JSON format.
```bash
$ ./convert_dm_xml_to_json.py
Usage: ./convert_dm_xml_to_json.py <tr-xxx cwmp xml data model> <tr-xxx usp xml data model> [Object path]
Examples:
  - ./convert_dm_xml_to_json.py tr-181-2-14-1-cwmp-full.xml tr-181-2-14-1-usp-full.xml Device.
    ==> Generate the json file of the sub tree Device. in tr181.json
  - ./convert_dm_xml_to_json.py tr-104-2-0-2-cwmp-full.xml tr-104-2-0-2-usp-full.xml Device.Services.VoiceService.
    ==> Generate the json file of the sub tree Device.Services.VoiceService. in tr104.json
  - ./convert_dm_xml_to_json.py tr-106-1-2-0-full.xml Device.
    ==> Generate the json file of the sub tree Device. in tr106.json
Example of xml data model file: https://www.broadband-forum.org/cwmp/tr-181-2-14-1-cwmp-full.xml
```
### XML generator
[Python script](./tools/generate_dm_xml.py) to generate the list of supported and unsupported Data Model tree in XML, for two ACS-supported formats: **Broadband Forum schema** and **HDM**.
```bash
$ ./generate_dm_xml.py -h
usage: generate_dm_xml.py [-h] [-r https://dev.iopsys.eu/iopsys/stunc.git^devel] [-v iopsys] [-p X_IOPSYS_EU_] [-d DEVICE_PROTOCOL_DSLFTR069v1] [-m iopsys] [-u 002207] [-c DG400PRIME] [-n DG400PRIME-A]
                          [-s 1.2.3.4] [-f BBF] [-o datamodel.xml]

Script to generate list of supported and non-supported parameter in xml format

optional arguments:
  -h, --help            show this help message and exit
  -r https://dev.iopsys.eu/iopsys/stunc.git^devel, --remote-dm https://dev.iopsys.eu/iopsys/stunc.git^devel
                        Includes OBJ/PARAM defined under remote repositories defined as bbf plugin
  -v iopsys, --vendor-list iopsys
                        Generate data model tree with vendor extension OBJ/PARAM.
  -p X_IOPSYS_EU_, --vendor-prefix X_IOPSYS_EU_
                        Generate data model tree using provided vendor prefix for vendor defined objects.
  -d DEVICE_PROTOCOL_DSLFTR069v1, --device-protocol DEVICE_PROTOCOL_DSLFTR069v1
                        Generate data model tree using this device protocol.
  -m iopsys, --manufacturer iopsys
                        Generate data model tree using this manufacturer.
  -u 002207, --manufacturer-oui 002207
                        Generate data model tree using this manufacturer oui.
  -c DG400PRIME, --product-class DG400PRIME
                        Generate data model tree using this product class.
  -n DG400PRIME-A, --model-name DG400PRIME-A
                        Generate data model tree using this model name.
  -s 1.2.3.4, --software-version 1.2.3.4
                        Generate data model tree using this software version.
  -f BBF, --format BBF  Generate data model tree with HDM format.
  -o datamodel.xml, --output datamodel.xml
                        Generate the output file with given name

Part of BBF-tools, refer Readme for more examples
```
More examples:
```bash
$ ./generate_dm_xml.py -v iopsys -v openwrt
```
### Excel generator
[Python script](./tools/generate_dm_excel.py) to generate the list of supported and unsupported parameters in an excel sheet.
```bash
$ ./generate_dm_excel.py -h
usage: generate_dm_excel.py [-h] -d tr181 [-r https://dev.iopsys.eu/iopsys/stunc.git^devel] [-v iopsys] [-p X_IOPSYS_EU_] [-o supported_datamodel.xls]

Script to generate list of supported and non-supported parameter in xls format

optional arguments:
  -h, --help            show this help message and exit
  -d tr181, --datamodel tr181
  -r https://dev.iopsys.eu/iopsys/stunc.git^devel, --remote-dm https://dev.iopsys.eu/iopsys/stunc.git^devel
                        Includes OBJ/PARAM defined under remote repositories defined as bbf plugin
  -v iopsys, --vendor-list iopsys
                        Generate data model tree with vendor extension OBJ/PARAM
  -p X_IOPSYS_EU_, --vendor-prefix X_IOPSYS_EU_
                        Generate data model tree using provided vendor prefix for vendor defined objects
  -o supported_datamodel.xls, --output supported_datamodel.xls
                        Generate the output file with given name

Part of BBF-tools, refer Readme for more examples
```
More examples:
```bash
$ ./generate_dm_excel.py -d tr181 -v iopsys -v openwrt -o datamodel.xls
$ ./generate_dm_excel.py -d tr181 -d tr104 -v iopsys -v openwrt -o datamodel.xls
```
### Data Model generator
This is a pipeline-friendly master script to generate the list of supported and unsupported data models in xml and xls formats, based on the input provided in a json file.
Example json file available [here](./tools/tools_input.json).
```bash
$ ./generate_dm.py
Usage: generate_dm.py <input json file>
Examples:
  - generate_dm.py tools_input.json
    ==> Generate all required files defined in tools_input.json file
```
The input json file should be defined as follows:
```json
{
    "manufacturer": "iopsys",
    "protocol": "DEVICE_PROTOCOL_DSLFTR069v1",
    "manufacturer_oui": "002207",
    "product_class": "DG400PRIME",
    "model_name": "DG400PRIME-A",
    "software_version": "1.2.3.4",
    "vendor_list": [
        "iopsys",
        "openwrt",
        "test"
    ],
    "vendor_prefix": "X_IOPSYS_EU_",
    "plugins": [
        {
            "repo": "https://dev.iopsys.eu/iopsys/mydatamodel.git",
            "version": "tag/hash/branch",
            "dm_files": [
                "src/datamodel.c",
                "src/additional_datamodel.c"
            ]
        },
        {
            "repo": "https://dev.iopsys.eu/iopsys/mybbfplugin.git",
            "version": "tag/hash/branch",
            "dm_files": [
                "dm.c"
            ]
        },
        {
            "repo": "https://dev.iopsys.eu/iopsys/mydatamodeljson.git",
            "version": "tag/hash/branch",
            "dm_files": [
                "src/plugin/datamodel.json"
            ]
        }
    ],
    "output": {
        "acs": [
            "hdm",
            "default"
        ],
        "file_format": [
            "xml",
            "xls"
        ],
        "output_dir": "./out",
        "output_file_prefix": "datamodel"
    }
}
```
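The CI job later checks for `out/datamodel.xls`, `out/datamodel_hdm.xml` and `out/datamodel_default.xml`, which follow from the `output` section above. A small sketch of how the expected file names can be derived, assuming that naming scheme (it is inferred from the files checked by gitlab-ci/tools-test.sh, not spelled out in the README):

```python
import os

def expected_outputs(output):
    # Derive output file names from the 'output' section of tools_input.json.
    # Naming scheme assumed: <prefix>.xls for xls, <prefix>_<acs>.xml per ACS format.
    names = []
    prefix = output["output_file_prefix"]
    for fmt in output["file_format"]:
        if fmt == "xml":
            for acs in output["acs"]:
                names.append("%s_%s.xml" % (prefix, acs))
        elif fmt == "xls":
            names.append(prefix + ".xls")
    return [os.path.join(output["output_dir"], n) for n in names]

output = {"acs": ["hdm", "default"], "file_format": ["xml", "xls"],
          "output_dir": "./out", "output_file_prefix": "datamodel"}
print(expected_outputs(output))
# → ['./out/datamodel_hdm.xml', './out/datamodel_default.xml', './out/datamodel.xls']
```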
- For more examples of tools input json file, you can see this link: [tools_input.json](./devel/tools/tools_input.json)
## Dependencies
To successfully build libbbfdm, the following libraries are needed:

File diff suppressed because it is too large

dmtree/json/tr135.json Normal file

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -21,6 +21,17 @@ function exec_cmd()
	fi
}

function exec_cmd_verbose()
{
	echo "executing $@"
	"$@"
	if [ $? -ne 0 ]; then
		echo "Failed to execute $@"
		exit 1
	fi
}
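Unlike `exec_cmd`, this variant echoes the command before running it. A self-contained sketch of the behavior on success (the function body is restated here so the snippet runs on its own; quoting the executed `"$@"` is this sketch's choice, to keep arguments with spaces intact):

```shell
#!/bin/bash
# Restatement of exec_cmd_verbose for illustration only.
exec_cmd_verbose()
{
    echo "executing $@"
    "$@"
    if [ $? -ne 0 ]; then
        echo "Failed to execute $@"
        exit 1
    fi
}

# Prints the command line first, then its output.
exec_cmd_verbose echo hello world
```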
function install_libbbf()
{
	COV_CFLAGS='-fprofile-arcs -ftest-coverage'

gitlab-ci/tools-test.sh Executable file

@@ -0,0 +1,78 @@
#!/bin/bash
echo "Verification of BBF Tools"
pwd
source ./gitlab-ci/shared.sh
# install required packages
exec_cmd apt update
exec_cmd apt install -y python3-pip
exec_cmd apt install -y libxml2-utils
exec_cmd pip3 install jsonschema
exec_cmd pip3 install xlwt
exec_cmd pip3 install pylint
echo "Validating PEP8 syntax on tools"
exec_cmd_verbose pylint -d R,C,W0603 tools/*.py
echo "********* Validate JSON Plugin *********"
echo "Validate BBF TR-181 JSON Plugin"
./test/tools/validate_json_plugin.py dmtree/json/tr181.json
check_ret $?
echo "Validate BBF TR-104 JSON Plugin"
./test/tools/validate_json_plugin.py dmtree/json/tr104.json
check_ret $?
echo "Validate BBF TR-135 JSON Plugin"
./test/tools/validate_json_plugin.py dmtree/json/tr135.json
check_ret $?
echo "Validate X_IOPSYS_EU_Dropbear JSON Plugin"
./test/tools/validate_json_plugin.py test/files/etc/bbfdm/json/X_IOPSYS_EU_Dropbear.json
check_ret $?
echo "Validate UserInterface JSON Plugin"
./test/tools/validate_json_plugin.py test/files/etc/bbfdm/json/UserInterface.json
check_ret $?
echo "Validate TR-181 JSON Plugin after generating from XML"
json_path=$(./tools/convert_dm_xml_to_json.py test/tools/tr-181-2-14-1-cwmp-full.xml test/tools/tr-181-2-14-1-usp-full.xml Device.)
./test/tools/validate_json_plugin.py "$json_path"
check_ret $?
echo "Validate TR-104 JSON Plugin after generating from XML"
json_path=$(./tools/convert_dm_xml_to_json.py test/tools/tr-104-2-0-2-cwmp-full.xml test/tools/tr-104-2-0-2-usp-full.xml Device.Services.VoiceService.)
./test/tools/validate_json_plugin.py "$json_path"
check_ret $?
echo "Validate TR-135 JSON Plugin after generating from XML"
json_path=$(./tools/convert_dm_xml_to_json.py test/tools/tr-135-1-4-1-cwmp-full.xml test/tools/tr-135-1-4-1-usp-full.xml Device.Services.STBService.)
./test/tools/validate_json_plugin.py "$json_path"
check_ret $?
echo "********* Validate XML File *********"
cd tools
./generate_dm.py tools_input.json
check_ret $?
echo "Check if the required files are generated"
[ ! -f "out/datamodel.xls" ] && echo "Excel file doesn't exist" && exit 1
[ ! -f "out/datamodel_hdm.xml" ] && echo "XML file with HDM format doesn't exist" && exit 1
[ ! -f "out/datamodel_default.xml" ] && echo "XML file with BBF format doesn't exist" && exit 1
cd ..
xmllint --schema test/tools/cwmp-datamodel-1-8.xsd tools/out/datamodel_default.xml --noout
#check_ret $? ## Need to be reviewed to remove all duplicate key-sequence
echo "********* Validate C File *********"
## TODO
date +%s > timestamp.log
echo "Tools Test :: PASS"

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,181 @@
#!/usr/bin/python3
# Copyright (C) 2021 iopsys Software Solutions AB
# Author: Amin Ben Ramdhane <amin.benramdhane@pivasoftware.com>
import os
import sys
import json
from jsonschema import validate
obj_schema = {
"definitions": {
"type_t": {
"type": "string",
"enum": [
"object"
]
},
"map_type_t": {
"type": "string",
"enum": [
"uci",
"ubus"
]
},
"protocols_t": {
"type": "string",
"enum": [
"cwmp",
"usp"
]
}
},
"type" : "object",
"properties" : {
"type" : {"$ref": "#/definitions/type_t"},
"version" : {"type": "string"},
"protocols" : {"type" : "array", "items" : {"$ref": "#/definitions/protocols_t"}},
"uniqueKeys" : {"type" : "array"},
"access" : {"type" : "boolean"},
"array" : {"type" : "boolean"},
"mapping" : {"type" : "object", "properties" : {
"type" : {"$ref": "#/definitions/map_type_t"},
"uci" : {"type" : "object", "properties" : {
"file" : {"type": "string"},
"section" : {"type": "object", "properties" : {
"type" : {"type": "string"}
}
},
"dmmapfile" : {"type": "string"}
}
},
"ubus" : {"type" : "object", "properties" : {
"object" : {"type": "string"},
"method" : {"type": "string"},
"args" : {"type": "object"},
"key" : {"type": "string"}
}
}
}
}
},
"required": [
"type",
"protocols",
"array"
]
}
param_schema = {
"definitions": {
"type_t": {
"type": "string",
"enum": [
"string",
"unsignedInt",
"unsignedLong",
"int",
"long",
"boolean",
"dateTime",
"hexBinary",
"base64",
"decimal"
]
},
"map_type_t": {
"type": "string",
"enum": [
"uci",
"ubus",
"procfs",
"sysfs"
]
},
"protocols_t": {
"type": "string",
"enum": [
"cwmp",
"usp"
]
}
},
"type" : "object",
"properties" : {
"type" : {"$ref": "#/definitions/type_t"},
"protocols" : {"type" : "array", "items" : {"$ref": "#/definitions/protocols_t"}},
"read" : {"type" : "boolean"},
"write" : {"type" : "boolean"},
"mapping" : {"type" : "array", "items" : {"type": "object", "properties" : {
"type" : {"$ref": "#/definitions/map_type_t"},
"uci" : {"type" : "object", "properties" : {
"file" : {"type": "string"},
"section" : {"type": "object", "properties" : {
"type" : {"type": "string"},
"index" : {"type": "string"}
}
},
"option" : {"type": "object", "properties" : {
"name" : {"type": "string"} }
}
}
},
"ubus" : {"type" : "object", "properties" : {
"object" : {"type": "string"},
"method" : {"type": "string"},
"args" : {"type": "object"},
"key" : {"type": "string"}
}
},
"procfs" : {"type" : "object", "properties" : {
"file" : {"type": "string"}
}
},
"sysfs" : {"type" : "object", "properties" : {
"file" : {"type": "string"}
}
}
}
}
}
},
"required": [
"type",
"protocols",
"read",
"write"
]
}
def print_validate_json_usage():
    print("Usage: " + sys.argv[0] + " <dm json file>")
    print("Examples:")
    print("  - " + sys.argv[0] + " tr181.json")
    print("    ==> Validate the json file")
    print("")
    exit(1)

def parse_value(key, value):
    if key.endswith('.') and not key.startswith('Device.'):
        print(key + " is not a valid path")
        exit(1)
    validate(instance=value, schema=obj_schema if key.endswith('.') else param_schema)
    for k, v in value.items():
        if not k.endswith('()') and k != "list" and k != "mapping" and isinstance(v, dict):
            parse_value(k, v)

### main ###
if len(sys.argv) < 2:
    print_validate_json_usage()

json_file = open(sys.argv[1], "r")
json_data = json.loads(json_file.read())

for key, value in json_data.items():
    parse_value(key, value)

print("JSON File is Valid")
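The path rule enforced by `parse_value` (keys ending with '.' are objects, and object paths must be rooted at 'Device.') can be sketched in isolation:

```python
def is_valid_object_path(key):
    # Mirrors the check in parse_value: keys ending with '.' denote objects,
    # and an object path must start with 'Device.'.
    if key.endswith('.'):
        return key.startswith('Device.')
    return True  # parameter names are not path-checked by this rule

print(is_valid_object_path("Device.WiFi."))  # True
print(is_valid_object_path("WiFi.Radio."))   # False
print(is_valid_object_path("Enable"))        # True
```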

tools/.gitignore vendored Normal file

@@ -0,0 +1,3 @@
/.repo
/.data_model.txt
/out


@@ -6,6 +6,8 @@
import os
import subprocess
import shutil
import json
from collections import OrderedDict

CURRENT_PATH = os.getcwd()
BBF_TR181_ROOT_FILE = "device.c"
@@ -18,57 +20,116 @@ BBF_DMTREE_PATH_TR104 = BBF_DMTREE_PATH + "/tr104"
BBF_DMTREE_PATH_TR143 = BBF_DMTREE_PATH + "/tr143"
BBF_DMTREE_PATH_TR181_JSON = BBF_DMTREE_PATH + "/json/tr181.json"
BBF_DMTREE_PATH_TR104_JSON = BBF_DMTREE_PATH + "/json/tr104.json"
DATA_MODEL_FILE = ".data_model.txt"
ARRAY_JSON_FILES = {"tr181": BBF_DMTREE_PATH_TR181_JSON,
                    "tr104": BBF_DMTREE_PATH_TR104_JSON}
LIST_DM_DIR = [BBF_DMTREE_PATH_TR181,
               BBF_DMTREE_PATH_TR104, BBF_DMTREE_PATH_TR143]
LIST_IGNORED_LINE = ['/*', '//', '#']
LIST_OBJ = []
LIST_PARAM = []
LIST_SUPPORTED_DM = []

Array_Types = {"string": "DMT_STRING",
               "unsignedInt": "DMT_UNINT",
               "unsignedLong": "DMT_UNLONG",
               "int": "DMT_INT",
               "long": "DMT_LONG",
               "boolean": "DMT_BOOL",
               "dateTime": "DMT_TIME",
               "hexBinary": "DMT_HEXBIN",
               "base64": "DMT_BASE64"}
def rename_file(old_file_name, new_file_name):
    try:
        os.rename(old_file_name, new_file_name)
    except OSError:
        pass

def remove_file(file_name):
    try:
        os.remove(file_name)
    except OSError:
        pass

def create_folder(folder_name):
    try:
        os.makedirs(folder_name, exist_ok=True)
    except OSError:
        pass

# rmtree exception handler
def rmtree_handler(_func, path, _exc_info):
    print("Failed to remove %s" % path)

def remove_folder(folder_name):
    if os.path.isdir(folder_name):
        shutil.rmtree(folder_name, onerror=rmtree_handler)

def cd_dir(path):
    try:
        os.chdir(path)
    except OSError:
        pass

def obj_has_child(value):
    if isinstance(value, dict):
        for _obj, val in value.items():
            if isinstance(val, dict):
                for obj1, val1 in val.items():
                    if obj1 == "type" and val1 == "object":
                        return 1
    return 0

def obj_has_param(value):
    if isinstance(value, dict):
        for _obj, val in value.items():
            if isinstance(val, dict):
                for obj1, val1 in val.items():
                    if obj1 == "type" and val1 != "object":
                        return 1
    return 0

def get_option_value(value, option, default=None):
    if isinstance(value, dict):
        for obj, val in value.items():
            if obj == option:
                return val
    return default

def get_param_type(value):
    paramtype = get_option_value(value, "type")
    return Array_Types.get(paramtype, None)
def clean_supported_dm_list():
    LIST_SUPPORTED_DM.clear()

def fill_list_supported_dm():
    fp = open(DATA_MODEL_FILE, 'r')
    Lines = fp.readlines()
    for line in Lines:
        LIST_SUPPORTED_DM.append(line)

def fill_data_model_file():
    fp = open(DATA_MODEL_FILE, 'a')
    for value in LIST_SUPPORTED_DM:
        print("%s" % value, file=fp)
    fp.close()

def generate_datamodel_tree(filename):
    if filename.endswith('.c') is False:
        return
    obj_found = 0
    param_found = 0
    obj_found_in_list = 0
@@ -79,19 +140,21 @@ def generate_datamodel_tree( filename ):
    for line in fp:
        if "DMOBJ" in line:
            table_name = line[:line.index('[]')].rstrip(
                '\n').replace("DMOBJ ", "")
            obj_found = 1
            continue
        if "DMLEAF" in line:
            table_name = line[:line.index('[]')].rstrip(
                '\n').replace("DMLEAF ", "")
            param_found = 1
            continue
        if obj_found == 0 and param_found == 0:
            continue
        if line.startswith(tuple(LIST_IGNORED_LINE)) is True:
            continue
        if "{0}" in line:
@@ -102,7 +165,7 @@ def generate_datamodel_tree( filename ):
            parent_obj = ""
            continue

        # Object Table
        if obj_found == 1:
            if obj_found_in_list == 0:
                for value in LIST_OBJ:
@@ -113,16 +176,18 @@ def generate_datamodel_tree( filename ):
                        LIST_OBJ.remove(value)

            obj = line.rstrip('\n').split(", ")
            obj_name = parent_obj + obj[0].replace("{", "").replace("\"", "").replace(
                "BBF_VENDOR_PREFIX", BBF_VENDOR_PREFIX).replace(" ", "")
            obj_permission = obj[1].replace("&", "").replace(" ", "")
            obj_mulinst = obj[5].replace("&", "").replace(" ", "")
            if obj_mulinst == "NULL":
                full_obj_name = obj_name + "."
            else:
                full_obj_name = obj_name + ".{i}."
            LIST_SUPPORTED_DM.append(
                full_obj_name + "," + obj_permission + ",DMT_OBJ")
            if obj[8] != "NULL":
                LIST_OBJ.append(full_obj_name + ":" + obj[8])
@@ -130,7 +195,7 @@ def generate_datamodel_tree( filename ):
            if obj[9] != "NULL":
                LIST_PARAM.append(full_obj_name + ":" + obj[9])

        # Parameter Table
        if param_found == 1:
            if obj_found_in_list == 0:
                for value in LIST_PARAM:
@@ -141,41 +206,41 @@ def generate_datamodel_tree( filename ):
                        LIST_PARAM.remove(value)

            param = line.rstrip('\n').split(", ")
            param_name = parent_obj + param[0].replace("{", "").replace(
                "\"", "").replace("BBF_VENDOR_PREFIX", BBF_VENDOR_PREFIX).replace(" ", "")
            param_permission = param[1].replace("&", "").replace(" ", "")
            param_type = param[2].replace(" ", "")
            LIST_SUPPORTED_DM.append(
                param_name + "," + param_permission + "," + param_type)
    fp.close()

def generate_dynamic_datamodel_tree(filename):
    if filename.endswith('.c') is False:
        return
    obj_found = 0
    table_name = ""
    fp = open(filename, 'r')
    for line in fp:
        if "DM_MAP_OBJ" in line:
            table_name = line[:line.index('[]')].rstrip('\n').replace("DM_MAP_OBJ ", "")
            obj_found = 1
            continue
        if obj_found == 0:
            continue
        if line.startswith(tuple(LIST_IGNORED_LINE)) is True:
            continue
        if "{0}" in line:
            obj_found = 0
            table_name = ""
            continue
        # Object Table
        if obj_found == 1:
            obj = line.rstrip('\n').split(", ")
            obj_name = obj[0][1:].replace("\"", "")
@@ -189,33 +254,65 @@ def generate_dynamic_datamodel_tree( filename ):
    fp.close()
def parse_dynamic_json_datamodel_tree(obj, value):
    obj_permission = "DMWRITE" if get_option_value(
        value, "array") is True else "DMREAD"
    LIST_SUPPORTED_DM.append(obj + "," + obj_permission + ",DMT_OBJ")
    hasobj = obj_has_child(value)
    hasparam = obj_has_param(value)
    if hasparam and isinstance(value, dict):
        for k, v in value.items():
            if k != "mapping" and isinstance(v, dict):
                for k1, v1 in v.items():
                    if k1 == "type" and v1 != "object":
                        param_name = obj + k
                        param_type = get_param_type(v)
                        param_permission = "DMWRITE" if get_option_value(
                            v, "write") is True else "DMREAD"
                        LIST_SUPPORTED_DM.append(
                            param_name + "," + param_permission + "," + param_type)
                        break
    if hasobj and isinstance(value, dict):
        for k, v in value.items():
            if isinstance(v, dict):
                for k1, v1 in v.items():
                    if k1 == "type" and v1 == "object":
                        parse_dynamic_json_datamodel_tree(k, v)

def generate_dynamic_json_datamodel_tree(filename):
    if filename.endswith('.json') is False:
        return
    json_file = open(filename, "r")
    data = json.loads(json_file.read(), object_pairs_hook=OrderedDict)
    for obj, value in data.items():
        if obj is None or obj.startswith('Device.') is False:
            continue
        parse_dynamic_json_datamodel_tree(obj, value)

def generate_supported_dm(vendor_prefix=None, vendor_list=None, plugins=None):
    '''
    1/ Download Remote Data Model if needed
    2/ Parse all Standard Data Model
    3/ Parse all Vendor Data Model if needed
    4/ Generate the list of Supported Data Model 'LIST_SUPPORTED_DM'
    5/ Copy the supported data model in file 'DATA_MODEL_FILE'
    '''
    ############## SET BBF VENDOR PREFIX ##############
    if vendor_prefix is not None:
        global BBF_VENDOR_PREFIX
        BBF_VENDOR_PREFIX = vendor_prefix

    ############## GEN Local BBF Data Models TREE ##############
    print("Generating the local data models...")
    cd_dir(BBF_DMTREE_PATH_TR181)
    generate_datamodel_tree(BBF_TR181_ROOT_FILE)
@@ -225,20 +322,18 @@ def generate_supported_dm( remote_dm, vendor_list ):
     for DIR in LIST_DM_DIR:
         cd_dir(DIR)
-        for root, dirs, files in os.walk("."):
+        for _root, _dirs, files in os.walk("."):
             for filename in files:
-                if ".h" in filename or filename == BBF_TR181_ROOT_FILE or filename == BBF_TR104_ROOT_FILE:
+                if filename.endswith('.c') is False or filename == BBF_TR181_ROOT_FILE or filename == BBF_TR104_ROOT_FILE:
                     continue
                 generate_datamodel_tree(filename)

     ############## GEN Vendors BBF Data Models TREE ##############
-    if vendor_list != None:
+    if vendor_list is not None and isinstance(vendor_list, list) and vendor_list:
         cd_dir(BBF_DMTREE_PATH)
-        vendor = vendor_list.split(",")
-        for i in range(vendor_list.count(',') + 1):
-            vendor_dir = "vendor/" + vendor[i] + "/tr181"
+        for vendor in vendor_list:
+            vendor_dir = "vendor/" + vendor + "/tr181"
             if os.path.isdir(vendor_dir):
                 cd_dir(vendor_dir)
@@ -246,46 +341,89 @@ def generate_supported_dm( remote_dm, vendor_list ):
                 if os.path.isfile(BBF_TR181_ROOT_FILE):
                     generate_datamodel_tree(BBF_TR181_ROOT_FILE)

-                for root, dirs, files in os.walk("."):
+                for _root, _dirs, files in os.walk("."):
                     for filename in files:
-                        if ".h" in filename or filename == BBF_VENDOR_ROOT_FILE or filename == BBF_TR181_ROOT_FILE:
+                        if filename.endswith('.c') is False or filename == BBF_VENDOR_ROOT_FILE or filename == BBF_TR181_ROOT_FILE:
                             continue
                         generate_datamodel_tree(filename)

             cd_dir(BBF_DMTREE_PATH)

+            vendor_dir = "vendor/" + vendor + "/tr104"
+            if os.path.isdir(vendor_dir):
+                cd_dir(vendor_dir)
+
+                for _root, _dirs, files in os.walk("."):
+                    for filename in files:
+                        if filename.endswith('.c') is False:
+                            continue
+                        generate_datamodel_tree(filename)
+
+            cd_dir(BBF_DMTREE_PATH)
+
+    ############## Download && Generate Plugins Data Models ##############
+    if plugins is not None and isinstance(plugins, list) and plugins:
+        print("Generating datamodels from defined plugins...")
-    ############## GEN External BBF Data Models TREE ##############
-    if remote_dm != None:
         cd_dir(CURRENT_PATH)
+        if isinstance(plugins, list):
+            for plugin in plugins:
+                repo = get_option_value(plugin, "repo")
+                version = get_option_value(plugin, "version")
+
-        for i in range(remote_dm.count(',') + 1):
-            if os.path.isdir("./.repo" + str(i)):
+                remove_folder(".repo")
+                try:
+                    subprocess.run(["git", "clone", repo, ".repo"],
+                                   stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, check=True)
+                except (OSError, subprocess.SubprocessError) as _e:
+                    print("Failed to clone %s" % repo)
+
-                cmd = 'find ./.repo%s/ -name datamodel.c' % str(i)
-                files = os.popen(cmd).read()
+                if version is not None:
+                    try:
+                        subprocess.run(["git", "-C", ".repo", "checkout", version],
+                                       stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, check=True)
+                    except (OSError, subprocess.SubprocessError) as _e:
+                        print("Failed to checkout git version %s" % version)
+
+                if os.path.isdir(".repo"):
+                    if version is None:
+                        print('├── Processing ' + repo)
+                    else:
+                        print('├── Processing ' + repo + '^' + version)
+
+                    dm_files = get_option_value(plugin, "dm_files")
+                    if dm_files is not None and isinstance(dm_files, list):
+                        for dm_file in dm_files:
+                            generate_dynamic_datamodel_tree(".repo/" + dm_file)
+                            generate_datamodel_tree(".repo/" + dm_file)
+                            generate_dynamic_json_datamodel_tree(".repo/" + dm_file)
+                    else:
+                        files = os.popen('find .repo/ -name datamodel.c').read()
                         for file in files.split('\n'):
                             if os.path.isfile(file):
                                 generate_dynamic_datamodel_tree(file)
                                 generate_datamodel_tree(file)
+
+                        files = os.popen('find .repo/ -name "*.json"').read()
+                        for file in files.split('\n'):
+                            if os.path.isfile(file):
+                                generate_dynamic_json_datamodel_tree(file)
+
+                remove_folder(".repo")
+
+        print('└── Processing of plugins done')

     ############## Remove Duplicated Element from List ##############
     global LIST_SUPPORTED_DM
     LIST_SUPPORTED_DM = list(set(LIST_SUPPORTED_DM))

     ############## Sort all elements in List ##############
     LIST_SUPPORTED_DM.sort(reverse=False)

     ############## Back to the current directory ##############
     cd_dir(CURRENT_PATH)

+    ############### COPY SUPPORTED DATA MODEL TO FILE ###############
+    remove_file(DATA_MODEL_FILE)
+    fill_data_model_file()
-    ############## Remove Remote Data Models ##############
-    if remote_dm != None:
-        for i in range(remote_dm.count(',') + 1):
-            remove_folder("./.repo" + str(i))

tools/convert_dm_json_to_c.py (new executable file, 1102 lines; diff suppressed because it is too large)

tools/convert_dm_xml_to_json.py (new executable file, 929 lines)
@@ -0,0 +1,929 @@
#!/usr/bin/python3
# Copyright (C) 2020 iopsys Software Solutions AB
# Author: Amin Ben Ramdhane <amin.benramdhane@pivasoftware.com>
import os
import sys
import time
import re
import json
import xml.etree.ElementTree as xml
from collections import OrderedDict
from shutil import copyfile
import bbf_common as bbf
listTypes = ["string",
"unsignedInt",
"unsignedLong",
"int",
"long",
"boolean",
"dateTime",
"hexBinary",
"base64"]
listdataTypes = ["string",
"unsignedInt",
"unsignedLong",
"int",
"long",
"boolean",
"dateTime",
"hexBinary",
"base64",
"IPAddress",
"IPv4Address",
"IPv6Address",
"IPPrefix",
"IPv4Prefix",
"IPv6Prefix",
"MACAddress",
"decimal",
"IoTDeviceType",
"IoTLevelType",
"IoTUnitType",
"IoTEnumSensorType",
"IoTEnumControlType"]
def getname(objname):
global model_root_name
OBJSname = objname
if (objname.count('.') > 1 and (objname.count('.') != 2 or objname.count('{i}') != 1)):
OBJSname = objname.replace(dmroot1.get('name'), "", 1)
OBJSname = OBJSname.replace("{i}", "")
OBJSname = OBJSname.replace(".", "")
if objname.count('.') == 1:
model_root_name = OBJSname
OBJSname = "Root" + OBJSname
return OBJSname
if (objname.count('.') == 2 and objname.count('{i}') == 1):
model_root_name = OBJSname
OBJSname = "Services" + OBJSname
return OBJSname
return OBJSname
def getparamtype(dmparam):
ptype = None
for s in dmparam:
if s.tag == "syntax":
for c in s:
if c.tag == "list":
ptype = "string"
break
if c.tag == "dataType":
reftype = c.get("ref")
if "StatsCounter" in reftype:
ptype = "unsignedInt"
break
ptype = "string"
break
ptype = c.tag
break
break
if ptype is None:
ptype = "__NA__"
return ptype
def getMinMaxEnumerationUnitPatternparam(paramtype, c):
paramvalrange = None
paramenum = None
paramunit = None
parampattern = None
if paramtype == "string" or paramtype == "hexBinary" or paramtype == "base64":
for cc in c:
if cc.tag == "size":
if paramvalrange is None:
paramvalrange = "%s,%s" % (
cc.get("minLength"), cc.get("maxLength"))
else:
paramvalrange = "%s;%s,%s" % (
paramvalrange, cc.get("minLength"), cc.get("maxLength"))
if cc.tag == "enumeration":
if paramenum is None:
paramenum = "\"%s\"" % cc.get('value')
else:
paramenum = "%s, \"%s\"" % (paramenum, cc.get('value'))
if cc.tag == "pattern":
if parampattern is None:
parampattern = "\"%s\"" % cc.get('value')
elif cc.get('value') != "":
parampattern = "%s,\"%s\"" % (
parampattern, cc.get('value'))
elif paramtype == "unsignedInt" or paramtype == "int" or paramtype == "unsignedLong" or paramtype == "long":
for cc in c:
if cc.tag == "range":
if paramvalrange is None:
paramvalrange = "%s,%s" % (
cc.get("minInclusive"), cc.get("maxInclusive"))
else:
paramvalrange = "%s;%s,%s" % (paramvalrange, cc.get(
"minInclusive"), cc.get("maxInclusive"))
if cc.tag == "units":
paramunit = cc.get("value")
return paramvalrange, paramenum, paramunit, parampattern
def getparamdatatyperef(datatyperef):
paramvalrange = None
paramenum = None
paramunit = None
parampattern = None
for d in xmlroot1:
if d.tag == "dataType" and d.get("name") == datatyperef:
if d.get("base") != "" and d.get("base") is not None and d.get("name") == "Alias":
paramvalrange, paramenum, paramunit, parampattern = getparamdatatyperef(
d.get("base"))
else:
for dd in d:
if dd.tag in listTypes:
paramvalrange, paramenum, paramunit, parampattern = getMinMaxEnumerationUnitPatternparam(
dd.tag, dd)
break
if dd.tag == "size":
if paramvalrange is None:
paramvalrange = "%s,%s" % (
dd.get("minLength"), dd.get("maxLength"))
else:
paramvalrange = "%s;%s,%s" % (
paramvalrange, dd.get("minLength"), dd.get("maxLength"))
if dd.tag == "enumeration":
if paramenum is None:
paramenum = "\"%s\"" % dd.get('value')
else:
paramenum = "%s, \"%s\"" % (
paramenum, dd.get('value'))
if dd.tag == "pattern":
if parampattern is None:
parampattern = "\"%s\"" % dd.get('value')
elif dd.get('value') != "":
parampattern = "%s,\"%s\"" % (
parampattern, dd.get('value'))
break
return paramvalrange, paramenum, paramunit, parampattern
def getparamlist(dmparam):
minItem = None
maxItem = None
maxsize = None
minItem = dmparam.get("minItems")
maxItem = dmparam.get("maxItems")
for cc in dmparam:
if cc.tag == "size":
maxsize = cc.get("maxLength")
return minItem, maxItem, maxsize
def getparamoption(dmparam):
datatype = None
paramvalrange = None
paramenum = None
paramunit = None
parampattern = None
listminItem = None
listmaxItem = None
listmaxsize = None
islist = 0
for s in dmparam:
if s.tag == "syntax":
for c in s:
if c.tag == "list":
islist = 1
listminItem, listmaxItem, listmaxsize = getparamlist(c)
for c1 in s:
datatype = c1.tag if c1.tag in listdataTypes else None
if datatype is not None:
paramvalrange, paramenum, paramunit, parampattern = getMinMaxEnumerationUnitPatternparam(
datatype, c1)
break
if c1.tag == "dataType":
datatype = c1.get("ref")
paramvalrange, paramenum, paramunit, parampattern = getparamdatatyperef(
c1.get("ref"))
break
if islist == 0:
datatype = c.tag if c.tag in listdataTypes else None
if datatype is not None:
paramvalrange, paramenum, paramunit, parampattern = getMinMaxEnumerationUnitPatternparam(
datatype, c)
break
if c.tag == "dataType":
datatype = c.get("ref")
paramvalrange, paramenum, paramunit, parampattern = getparamdatatyperef(
datatype)
break
break
return islist, datatype, paramvalrange, paramenum, paramunit, parampattern, listminItem, listmaxItem, listmaxsize
listmapping = []
def generatelistfromfile(dmobject):
obj = dmobject.get('name').split(".")
if "tr-104" in sys.argv[1]:
pathfilename = "../dmtree/tr104/" + obj[1].lower() + ".c"
pathiopsyswrtfilename = "../dmtree/tr104/" + \
obj[1].lower() + "-iopsyswrt.c"
else:
pathfilename = "../dmtree/tr181/" + obj[1].lower() + ".c"
pathiopsyswrtfilename = "../dmtree/tr181/" + \
obj[1].lower() + "-iopsyswrt.c"
for x in range(0, 2):
pathfile = pathfilename if x == 0 else pathiopsyswrtfilename
exists = os.path.isfile(pathfile)
if exists:
filec = open(pathfile, "r")
for linec in filec:
if "/*#" in linec:
listmapping.append(linec)
else:
pass
def getparammapping(dmobject, dmparam):
hasmapping = 0
mapping = ""
if "tr-104" in sys.argv[1]:
param = "Device.Services." + dmobject.get('name') + dmparam.get('name')
else:
param = dmobject.get('name') + dmparam.get('name')
for value in listmapping:
if param in value:
hasmapping = 1
config_type = value.split("!")
mapping = config_type[1]
mapping = mapping.replace("*/\n", "")
break
return hasmapping, mapping
def getobjmapping(dmobject):
hasmapping = 0
mapping = ""
if "tr-104" in sys.argv[1]:
obj = "Device.Services." + dmobject.get('name')
else:
obj = dmobject.get('name')
for value in listmapping:
config_type = value.split("!")
mapping = config_type[0]
mapping = mapping.replace("/*#", "")
if obj == mapping:
hasmapping = 1
mapping = config_type[1]
mapping = mapping.replace("*/\n", "")
break
return hasmapping, mapping
def objhaschild(parentname, level, check_obj):
hasobj = 0
model = model2 if check_obj == 0 else model1
for c in model:
objname = c.get('name')
if c.tag == "object" and parentname in objname and (objname.count('.') - objname.count('{i}')) == level:
hasobj = 1
break
return hasobj
def objhasparam(dmobject):
hasparam = 0
for c in dmobject:
if c.tag == "parameter":
hasparam = 1
break
return hasparam
def getuniquekeys(dmobject):
uniquekeys = None
for c in dmobject:
if c.tag == "uniqueKey":
for s in c:
if s.tag == "parameter":
if uniquekeys is None:
uniquekeys = "\"%s\"" % s.get('ref')
else:
uniquekeys = uniquekeys + "," + "\"%s\"" % s.get('ref')
return uniquekeys
def printopenobject(obj):
fp = open('./.json_tmp', 'a')
if "tr-104" in sys.argv[1] or "tr-135" in sys.argv[1]:
print("\"Device.Services.%s\" : {" % obj.get(
'name').replace(" ", ""), file=fp)
else:
print("\"%s\" : {" % obj.get('name').replace(" ", ""), file=fp)
fp.close()
def printopenfile():
fp = open('./.json_tmp', 'a')
print("{", file=fp)
fp.close()
def printclosefile():
fp = open('./.json_tmp', 'a')
print("}", file=fp)
fp.close()
def printOBJMaPPING(mapping):
fp = open('./.json_tmp', 'a')
config_type = mapping.split(":")
config = config_type[1].split("/")
print("\"mapping\": {", file=fp)
print("\"type\": \"%s\"," % config_type[0].lower(), file=fp)
print("\"%s\": {" % config_type[0].lower(), file=fp)
# UCI
if config_type[0] == "UCI":
print("\"file\": \"%s\"," % config[0], file=fp)
print("\"section\": {", file=fp)
print("\"type\": \"%s\"" % config[1], file=fp)
print("},", file=fp)
print("\"dmmapfile\": \"%s\"" % config[2], file=fp)
# UBUS
elif config_type[0] == "UBUS":
print("\"object\": \"%s\"," % config[0], file=fp)
print("\"method\": \"%s\"," % config[1], file=fp)
print("\"args\": {", file=fp)
if config[2] != "":
args = config[2].split(",")
print("\"%s\": \"%s\"" % (args[0], args[1]), file=fp)
print("}", file=fp)
print("\"key\": \"%s\"" % config[3], file=fp)
print("}\n}", file=fp)
fp.close()
def printPARAMMaPPING(mapping):
fp = open('./.json_tmp', 'a')
lst = mapping.split("&")
print("\"mapping\": [", file=fp)
for i in range(len(lst)):
config_type = lst[i].split(":")
config = config_type[1].split("/")
print("{", file=fp)
print("\"type\": \"%s\"," % config_type[0].lower(), file=fp)
# SYSFS || PROCFS
if config_type[0] == "SYSFS" or config_type[0] == "PROCFS":
print("\"file\": \"%s\"" % config_type[1], file=fp)
# UCI, UBUS, CLI
else:
# Only for UCI, UBUS, CLI
print("\"%s\": {" % config_type[0].lower(), file=fp)
# UCI
if config_type[0] == "UCI":
print("\"file\": \"%s\"," % config[0], file=fp)
print("\"section\": {", file=fp)
var = config[1].split(",")
if len(var) == 1:
print("\"type\": \"%s\"" % var[0], file=fp)
elif len(var) > 1 and "@i" in var[1]:
print("\"type\": \"%s\"," % var[0], file=fp)
print("\"index\": \"%s\"" % var[1], file=fp)
elif len(var) > 1:
print("\"type\": \"%s\"," % var[0], file=fp)
print("\"name\": \"%s\"" % var[1], file=fp)
print("}", file=fp)
if len(var) > 1:
print("\"option\": {", file=fp)
print("\"name\": \"%s\"" % config[2], file=fp)
print("}", file=fp)
# UBUS
elif config_type[0] == "UBUS":
print("\"object\": \"%s\"," % config[0], file=fp)
print("\"method\": \"%s\"," % config[1], file=fp)
print("\"args\": {", file=fp)
if config[2] != "":
args = config[2].split(",")
print("\"%s\": \"%s\"" % (args[0], args[1]), file=fp)
print("}", file=fp)
print("\"key\": \"%s\"" % config[3], file=fp)
# CLI
elif config_type[0] == "CLI":
print("\"command\": \"%s\"," % config[0], file=fp)
print("\"args\": \"%s\"" % config[1], file=fp)
print("}", file=fp)
print("}", file=fp)
print("]\n}", file=fp)
fp.close()
def removelastline():
file = open("./.json_tmp")
lines = file.readlines()
lines = lines[:-1]
file.close()
w = open("./.json_tmp", 'w')
w.writelines(lines)
w.close()
printclosefile()
def replace_data_in_file(data_in, data_out):
file_r = open("./.json_tmp", "rt")
file_w = open("./.json_tmp_1", "wt")
text = ''.join(file_r).replace(data_in, data_out)
file_w.write(text)
file_r.close()
file_w.close()
copyfile("./.json_tmp_1", "./.json_tmp")
bbf.remove_file("./.json_tmp_1")
def updatejsontmpfile():
replace_data_in_file("}\n", "},\n")
replace_data_in_file("},\n},", "}\n},")
replace_data_in_file("}\n},\n},", "}\n}\n},")
replace_data_in_file("}\n},\n}\n},", "}\n}\n}\n},")
replace_data_in_file("}\n},\n}\n}\n},", "}\n}\n}\n}\n},")
replace_data_in_file("}\n}\n}\n},\n}\n},", "}\n}\n}\n}\n}\n},")
replace_data_in_file("}\n}\n}\n}\n}\n}\n},", "}\n}\n}\n}\n}\n}\n},")
replace_data_in_file("}\n}\n}\n},\n}\n}\n}\n},", "}\n}\n}\n}\n}\n}\n}\n},")
replace_data_in_file("},\n]", "}\n]")
def removetmpfiles():
bbf.remove_file("./.json_tmp")
bbf.remove_file("./.json_tmp_1")
def printOBJ(dmobject, hasobj, hasparam, bbfdm_type):
uniquekeys = getuniquekeys(dmobject)
hasmapping, mapping = getobjmapping(dmobject)
if (dmobject.get('name')).endswith(".{i}."):
fbrowse = "true"
else:
fbrowse = "false"
fp = open('./.json_tmp', 'a')
print("\"type\" : \"object\",", file=fp)
print("\"version\" : \"%s\"," % dmobject.get('version'), file=fp)
print("\"protocols\" : [%s]," % bbfdm_type, file=fp)
if uniquekeys is not None:
print("\"uniqueKeys\" : [%s]," % uniquekeys, file=fp)
if dmobject.get('access') == "readOnly":
print("\"access\" : false,", file=fp)
else:
print("\"access\" : true,", file=fp)
if hasparam or hasobj:
print("\"array\" : %s," % fbrowse, file=fp)
else:
print("\"array\" : %s" % fbrowse, file=fp)
fp.close()
if hasmapping:
printOBJMaPPING(mapping)
def printPARAM(dmparam, dmobject, bbfdm_type):
hasmapping, mapping = getparammapping(dmobject, dmparam)
islist, datatype, paramvalrange, paramenum, paramunit, parampattern, listminItem, listmaxItem, listmaxsize = getparamoption(
dmparam)
fp = open('./.json_tmp', 'a')
print("\"%s\" : {" % dmparam.get('name').replace(" ", ""), file=fp)
print("\"type\" : \"%s\"," % getparamtype(dmparam), file=fp)
print("\"read\" : true,", file=fp)
print("\"write\" : %s," % ("false" if dmparam.get(
'access') == "readOnly" else "true"), file=fp)
print("\"version\" : \"%s\"," % dmparam.get('version'), file=fp)
print("\"protocols\" : [%s]," % bbfdm_type, file=fp)
# create list
if islist == 1:
print("\"list\" : {", file=fp)
# add datatype
print(("\"datatype\" : \"%s\"," % datatype) if (listmaxsize is not None or listminItem is not None or listmaxItem is not None or paramvalrange is not None or paramunit is not
None or paramenum is not None or parampattern is not None or (hasmapping and islist == 0)) else ("\"datatype\" : \"%s\"" % datatype), file=fp)
if islist == 1:
# add maximum size of list
if listmaxsize is not None:
print(("\"maxsize\" : %s," % listmaxsize) if (listminItem is not None or listmaxItem is not None or paramvalrange is not None
or paramunit is not None or paramenum is not None or parampattern is not None) else ("\"maxsize\" : %s" % listmaxsize), file=fp)
# add minimum and maximum item values
if listminItem is not None and listmaxItem is not None:
print("\"item\" : {", file=fp)
print("\"min\" : %s," % listminItem, file=fp)
print("\"max\" : %s" % listmaxItem, file=fp)
print(("},") if (paramvalrange is not None or paramunit is not None
or paramenum is not None or parampattern is not None) else ("}"), file=fp)
elif listminItem is not None and listmaxItem is None:
print("\"item\" : {", file=fp)
print("\"min\" : %s" % listminItem, file=fp)
print(("},") if (paramvalrange is not None or paramunit is not None
or paramenum is not None or parampattern is not None) else ("}"), file=fp)
elif listminItem is None and listmaxItem is not None:
print("\"item\" : {", file=fp)
print("\"max\" : %s" % listmaxItem, file=fp)
print(("},") if (paramvalrange is not None or paramunit is not None
or paramenum is not None or parampattern is not None) else ("}"), file=fp)
# add minimum and maximum values
if paramvalrange is not None:
valranges = paramvalrange.split(";")
print("\"range\" : [", file=fp)
for eachvalrange in valranges:
valrange = eachvalrange.split(",")
if valrange[0] != "None" and valrange[1] != "None":
print("{", file=fp)
print("\"min\" : %s," % valrange[0], file=fp)
print("\"max\" : %s" % valrange[1], file=fp)
print(("},") if (eachvalrange ==
valranges[len(valranges)-1]) else ("}"), file=fp)
elif valrange[0] != "None" and valrange[1] == "None":
print("{", file=fp)
print("\"min\" : %s" % valrange[0], file=fp)
print(("},") if (eachvalrange ==
valranges[len(valranges)-1]) else ("}"), file=fp)
elif valrange[0] == "None" and valrange[1] != "None":
print("{", file=fp)
print("\"max\" : %s" % valrange[1], file=fp)
print(("},") if (eachvalrange ==
valranges[len(valranges)-1]) else ("}"), file=fp)
print(("],") if (paramunit is not None or paramenum is not None or parampattern is not None
or (hasmapping and islist == 0)) else ("]"), file=fp)
# add unit
if paramunit is not None:
print(("\"unit\" : \"%s\"," % paramunit) if (paramenum is not None or parampattern is not None or (
hasmapping and islist == 0)) else ("\"unit\" : \"%s\"" % paramunit), file=fp)
# add enumeration
if paramenum is not None:
print(("\"enumerations\" : [%s]," % paramenum) if (parampattern is not None or (
hasmapping and islist == 0)) else ("\"enumerations\" : [%s]" % paramenum), file=fp)
# add pattern
if parampattern is not None:
print(("\"pattern\" : [%s]," % parampattern.replace("\\", "\\\\")) if (
hasmapping and islist == 0) else ("\"pattern\" : [%s]" % parampattern.replace("\\", "\\\\")), file=fp)
# close list
if islist == 1:
print(("},") if hasmapping else ("}"), file=fp)
# add mapping
if hasmapping:
fp.close()
printPARAMMaPPING(mapping)
else:
print("}", file=fp)
fp.close()
def printCOMMAND(dmparam, dmobject, _bbfdm_type):
fp = open('./.json_tmp', 'a')
print("\"%s\" : {" % dmparam.get('name'), file=fp)
print("\"type\" : \"command\",", file=fp)
print("\"async\" : %s," %
("true" if dmparam.get('async') is not None else "false"), file=fp)
print("\"version\" : \"%s\"," % dmparam.get('version'), file=fp)
inputfound = 0
outputfound = 0
for c in dmparam:
if c.tag == "input":
inputfound = 1
elif c.tag == "output":
outputfound = 1
print(("\"protocols\" : [\"usp\"],") if (inputfound or outputfound) else (
"\"protocols\" : [\"usp\"]"), file=fp)
for c in dmparam:
if c.tag == "input":
print("\"input\" : {", file=fp)
for param in c:
if param.tag == "parameter":
fp.close()
printPARAM(param, dmobject, "\"usp\"")
fp = open('./.json_tmp', 'a')
print("}" if outputfound else "},", file=fp)
if c.tag == "output":
print("\"output\" : {", file=fp)
for param in c:
if param.tag == "parameter":
fp.close()
printPARAM(param, dmobject, "\"usp\"")
fp = open('./.json_tmp', 'a')
print("}", file=fp)
print("}", file=fp)
fp.close()
def printusage():
print("Usage: " +
sys.argv[0] + " <tr-xxx cwmp xml data model> <tr-xxx usp xml data model> [Object path]")
print("Examples:")
print(" - " + sys.argv[0] +
" tr-181-2-14-1-cwmp-full.xml tr-181-2-14-1-usp-full.xml Device.")
print(" ==> Generate the json file of the sub tree Device. in tr181.json")
print(" - " + sys.argv[0] +
" tr-104-2-0-2-cwmp-full.xml tr-104-2-0-2-usp-full.xml Device.Services.VoiceService.")
print(" ==> Generate the json file of the sub tree Device.Services.VoiceService. in tr104.json")
print(" - " + sys.argv[0] + " tr-106-1-2-0-full.xml Device.")
print(" ==> Generate the json file of the sub tree Device. in tr106.json")
print("")
print("Example of xml data model file: https://www.broadband-forum.org/cwmp/tr-181-2-14-1-cwmp-full.xml")
exit(1)
def getobjectpointer(objname):
obj = None
for c in model1:
if c.tag == "object" and (c.get('name') == objname or c.get('name') == (objname + "{i}.")):
obj = c
break
return obj
def chech_each_obj_with_other_obj(m1, m2):
for c in m2:
if c.tag == "object":
found = 0
for obj in m1:
if obj.tag == "object" and (obj.get('name') == c.get('name')):
found = 1
break
if found == 0:
if c.get('name').count(".") - (c.get('name')).count("{i}.") != 2:
continue
dmlevel = (c.get('name')).count(".") - \
(c.get('name')).count("{i}.") + 1
printopenobject(c)
object_parse_childs(c, dmlevel, 0, 0)
printclosefile()
def check_if_obj_exist_in_other_xml_file(objname):
obj = None
found = 0
for c in model2:
if c.tag == "object" and (c.get('name') == objname.get('name')):
obj = c
found = 1
break
return obj, found
def chech_current_param_exist_in_other_obj(obj, c):
bbfdm_type = ""
for param in obj:
if param.tag == "parameter" and param.get('name') == c.get('name'):
bbfdm_type = "\"cwmp\", \"usp\""
break
if bbfdm_type == "" and "cwmp" in sys.argv[1]:
bbfdm_type = "\"cwmp\""
elif bbfdm_type == "" and "usp" in sys.argv[1]:
bbfdm_type = "\"usp\""
return bbfdm_type
def chech_obj_with_other_obj(obj, dmobject):
for c in obj:
exist = 0
if c.tag == "parameter":
for param in dmobject:
if param.tag == "parameter" and c.get('name') == param.get('name'):
exist = 1
break
if exist == 0 and "cwmp" in sys.argv[1]:
printPARAM(c, obj, "\"usp\"")
elif exist == 0 and "usp" in sys.argv[1]:
printPARAM(c, obj, "\"cwmp\"")
if c.tag == "command":
printCOMMAND(c, obj, "\"usp\"")
def object_parse_childs(dmobject, level, generatelist, check_obj):
if generatelist == 0 and (dmobject.get('name')).count(".") == 2:
generatelistfromfile(dmobject)
if check_obj == 1 and ("tr-181" in sys.argv[1] or "tr-104" in sys.argv[1]):
obj, exist = check_if_obj_exist_in_other_xml_file(dmobject)
hasobj = objhaschild(dmobject.get('name'), level, check_obj)
hasparam = objhasparam(dmobject)
if check_obj == 1 and "tr-181" in sys.argv[1] and exist == 0:
printOBJ(dmobject, hasobj, hasparam, "\"cwmp\"")
elif check_obj == 0 and "tr-181" in sys.argv[1]:
printOBJ(dmobject, hasobj, hasparam, "\"usp\"")
else:
printOBJ(dmobject, hasobj, hasparam, "\"cwmp\", \"usp\"")
if hasparam:
for c in dmobject:
if c.tag == "parameter":
if check_obj == 1 and "tr-181" in sys.argv[1] and exist == 1:
bbfdm_type = chech_current_param_exist_in_other_obj(obj, c)
elif check_obj == 1 and "tr-181" in sys.argv[1] and exist == 0:
bbfdm_type = "\"cwmp\""
elif check_obj == 0:
bbfdm_type = "\"usp\""
else:
bbfdm_type = "\"cwmp\", \"usp\""
printPARAM(c, dmobject, bbfdm_type)
if c.tag == "command":
printCOMMAND(c, dmobject, "\"usp\"")
if check_obj == 1 and "tr-181" in sys.argv[1] and exist == 1:
chech_obj_with_other_obj(obj, dmobject)
if hasobj and check_obj:
for c in model1:
objname = c.get('name')
if c.tag == "object" and dmobject.get('name') in objname and (objname.count('.') - objname.count('{i}')) == level:
printopenobject(c)
object_parse_childs(c, level+1, 0, 1)
printclosefile()
if hasobj and check_obj == 0:
for c in model2:
objname = c.get('name')
if c.tag == "object" and dmobject.get('name') in objname and (objname.count('.') - objname.count('{i}')) == level:
printopenobject(c)
object_parse_childs(c, level+1, 0, 0)
printclosefile()
return
def generatejsonfromobj(pobj, pdir):
generatelist = 0
bbf.create_folder(pdir)
removetmpfiles()
dmlevel = (pobj.get('name')).count(".") - \
(pobj.get('name')).count("{i}.") + 1
if (pobj.get('name')).count(".") == 1:
generatelist = 0
else:
generatelistfromfile(pobj)
generatelist = 1
printopenfile()
printopenobject(pobj)
object_parse_childs(pobj, dmlevel, generatelist, 1)
if "tr-181" in sys.argv[1] and Root.count(".") == 1:
chech_each_obj_with_other_obj(model1, model2)
if "tr-181" in sys.argv[1] and pobj.get("name").count(".") == 1:
dmfp = open(pdir + "/tr181.json", "a")
elif "tr-104" in sys.argv[1] and pobj.get("name").count(".") == 2:
dmfp = open(pdir + "/tr104.json", "a")
elif "tr-135" in sys.argv[1] and pobj.get("name").count(".") == 2:
dmfp = open(pdir + "/tr135.json", "a")
elif "tr-106" in sys.argv[1] and pobj.get("name").count(".") == 1:
dmfp = open(pdir + "/tr106.json", "a")
else:
dmfp = open(pdir + "/" + (getname(pobj.get('name'))
).lower() + ".json", "a")
printclosefile()
printclosefile()
updatejsontmpfile()
removelastline()
f = open("./.json_tmp", "r")
obj = json.load(f, object_pairs_hook=OrderedDict)
dump = json.dumps(obj, indent=4)
tabs = re.sub('\n +', lambda match: '\n' + '\t' *
int(len(match.group().strip('\n')) / 4), dump)
try:
print("%s" % tabs, file=dmfp)
dmfp.close()
except IOError:
pass
removetmpfiles()
return dmfp.name
### main ###
if len(sys.argv) < 4:
printusage()
if (sys.argv[1]).lower() == "-h" or (sys.argv[1]).lower() == "--help":
printusage()
is_service_model = 0
model_root_name = "Root"
tree1 = xml.parse(sys.argv[1])
xmlroot1 = tree1.getroot()
model1 = xmlroot1
for child in model1:
if child.tag == "model":
model1 = child
if model1.tag != "model":
print("Wrong %s XML Data model format!" % sys.argv[1])
exit(1)
dmroot1 = None
for dr in model1:
if dr.tag == "object" and dr.get("name").count(".") == 1:
dmroot1 = dr
break
# If it is service data model
if dmroot1 is None:
is_service_model = 1
for dr in model1:
if dr.tag == "object" and dr.get("name").count(".") == 2:
dmroot1 = dr
break
if dmroot1 is None:
print("Wrong %s XML Data model format!" % sys.argv[1])
exit(1)
if "tr-181" in sys.argv[1] or "tr-104" in sys.argv[1]:
tree2 = xml.parse(sys.argv[2])
xmlroot2 = tree2.getroot()
model2 = xmlroot2
for child in model2:
if child.tag == "model":
model2 = child
if model2.tag != "model":
print("Wrong %s XML Data model format!" % sys.argv[2])
exit(1)
dmroot2 = None
for dr in model2:
if dr.tag == "object" and dr.get("name").count(".") == 1:
dmroot2 = dr
break
# If it is service data model
if dmroot2 is None:
for dr in model2:
if dr.tag == "object" and dr.get("name").count(".") == 2:
dmroot2 = dr
break
if dmroot2 is None:
print("Wrong %s XML Data model format!" % sys.argv[2])
exit(1)
Root = sys.argv[3]
if "tr-181" in sys.argv[1]:
gendir = "tr181_" + time.strftime("%Y-%m-%d_%H-%M-%S")
elif "tr-104" in sys.argv[1]:
gendir = "tr104_" + time.strftime("%Y-%m-%d_%H-%M-%S")
Root = (sys.argv[3])[len("Device.Services."):]
elif "tr-135" in sys.argv[1]:
gendir = "tr135_" + time.strftime("%Y-%m-%d_%H-%M-%S")
Root = (sys.argv[3])[len("Device.Services."):]
elif "tr-106" in sys.argv[1]:
gendir = "tr106_" + time.strftime("%Y-%m-%d_%H-%M-%S")
else:
gendir = "source_" + time.strftime("%Y-%m-%d_%H-%M-%S")
objstart = getobjectpointer(Root)
if objstart is None:
print("Wrong Object Name! %s" % Root)
exit(1)
filename = generatejsonfromobj(objstart, gendir)
print(filename)
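For reference, the mapping comments that `generatelistfromfile()` collects from the C sources follow a `/*#Object!BACKEND:args*/` layout, which `printOBJMaPPING()` splits on `!`, `:` and `/` to emit the JSON `mapping` block. A minimal sketch of the UCI case (the `dhcp/dns/dmmap_dns` path below is a made-up example, not a real mapping from the tree):

```python
# Sketch of the UCI-mapping split performed in printOBJMaPPING();
# "dhcp/dns/dmmap_dns" is a hypothetical example path.
mapping = "UCI:dhcp/dns/dmmap_dns"

config_type = mapping.split(":")    # ["UCI", "dhcp/dns/dmmap_dns"]
config = config_type[1].split("/")  # ["dhcp", "dns", "dmmap_dns"]

parsed = {
    "type": config_type[0].lower(),
    config_type[0].lower(): {
        "file": config[0],              # UCI config file
        "section": {"type": config[1]}, # UCI section type
        "dmmapfile": config[2],         # dmmap file backing the object
    },
}
print(parsed)
```

The same split drives the UBUS branch, where `config` holds object, method, args and key instead.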

tools/generate_dm.py (new executable file, 119 lines)
@@ -0,0 +1,119 @@
#!/usr/bin/python3
# Copyright (C) 2021 iopsys Software Solutions AB
# Author: Amin Ben Ramdhane <amin.benramdhane@pivasoftware.com>
import sys
import json
import bbf_common as bbf
import generate_dm_xml as bbf_xml
import generate_dm_excel as bbf_excel
def print_dm_usage():
print("Usage: " + sys.argv[0] + " <input json file>")
print("Examples:")
print(" - " + sys.argv[0] + " tools_input.json")
print(" ==> Generate all required files defined in tools_input.json file")
print("")
exit(1)
def get_vendor_list(val):
vendor_list = ""
if isinstance(val, list):
for vendor in val:
vendor_list = vendor if not vendor_list else (
vendor_list + "," + vendor)
return vendor_list
### main ###
if len(sys.argv) < 2:
print_dm_usage()
VENDOR_PREFIX = None
VENDOR_LIST = None
PLUGINS = None
OUTPUT = None
json_file = open(sys.argv[1], "r")
json_data = json.loads(json_file.read())
for option, value in json_data.items():
if option is None:
print("!!!! %s : Wrong JSON format!" % sys.argv[1])
exit(1)
if option == "manufacturer":
bbf_xml.MANUFACTURER = value
continue
if option == "protocol":
bbf_xml.DEVICE_PROTOCOL = value
continue
if option == "manufacturer_oui":
bbf_xml.MANUFACTURER_OUI = value
continue
if option == "product_class":
bbf_xml.PRODUCT_CLASS = value
continue
if option == "model_name":
bbf_xml.MODEL_NAME = value
continue
if option == "software_version":
bbf_xml.SOFTWARE_VERSION = value
continue
if option == "vendor_prefix":
VENDOR_PREFIX = value
continue
if option == "vendor_list":
VENDOR_LIST = value
continue
if option == "plugins":
PLUGINS = value
continue
if option == "output":
OUTPUT = value
continue
bbf.generate_supported_dm(VENDOR_PREFIX, VENDOR_LIST, PLUGINS)
file_format = bbf.get_option_value(OUTPUT, "file_format", ['xml'])
output_file_prefix = bbf.get_option_value(OUTPUT, "output_file_prefix", "datamodel")
output_dir = bbf.get_option_value(OUTPUT, "output_dir", "./out")
bbf.create_folder(output_dir)
if isinstance(file_format, list):
for _format in file_format:
if _format == "xml":
acs = bbf.get_option_value(OUTPUT, "acs", ['default'])
if isinstance(acs, list):
for acs_format in acs:
bbf.clean_supported_dm_list()
output_file_name = output_dir + '/' + output_file_prefix + '_' + acs_format + '.xml'
if acs_format == "hdm":
bbf_xml.generate_xml('HDM', output_file_name)
if acs_format == "default":
bbf_xml.generate_xml('default', output_file_name)
if _format == "xls":
bbf.clean_supported_dm_list()
output_file_name = output_dir + '/' + output_file_prefix + '.xls'
bbf_excel.generate_excel(['tr181', 'tr104'], output_file_name)
bbf.remove_file(bbf.DATA_MODEL_FILE)
print("Datamodel generation completed, artifacts are available in the out directory or as per the input JSON configuration")
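The option names consumed by the loop above come straight from the input JSON. A minimal `tools_input.json` can be sketched from those keys; all values below are illustrative placeholders, not taken from the repository:

```python
import json

# Keys match the options read by generate_dm.py above; the "output"
# sub-keys match the get_option_value() lookups. Values are placeholders.
tools_input = {
    "manufacturer": "ExampleCorp",
    "protocol": "cwmp",
    "vendor_prefix": "X_EXAMPLE_COM_",
    "vendor_list": ["example"],
    "output": {
        "file_format": ["xml", "xls"],
        "acs": ["default", "hdm"],
        "output_file_prefix": "datamodel",
        "output_dir": "./out"
    }
}

text = json.dumps(tools_input, indent=4)
```

Writing `text` to `tools_input.json` and passing it as the script's first argument would then drive the generation loop above.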

File diff suppressed because it is too large


@@ -3,42 +3,18 @@
# Copyright (C) 2021 iopsys Software Solutions AB
# Author: Amin Ben Ramdhane <amin.benramdhane@pivasoftware.com>

from collections import OrderedDict
import os
import json
import argparse
import xlwt

import bbf_common as bbf

LIST_DM = []


def getprotocols(value):
    if isinstance(value, dict):
        for obj, val in value.items():
            if obj == "protocols" and isinstance(val, list):
@@ -50,7 +26,8 @@ def getprotocols( value ):
                return "CWMP"
    return "CWMP+USP"


def check_param_obj(dmobject):
    for value in bbf.LIST_SUPPORTED_DM:
        obj = value.split(",")
        if obj[0] == dmobject:
@@ -58,7 +35,8 @@ def check_param_obj( dmobject ):
            return "Yes"
    return "No"


def check_commands(param):
    cmd = 'awk \'/static const struct op_cmd operate_helper/,/^};$/\' ../dmoperate.c'
    param = param.replace(".{i}.", ".*.").replace("()", "")
@@ -67,10 +45,12 @@ def check_commands( param ):
    return "Yes" if string in res else "No"


def add_data_to_list_dm(obj, supported, protocols, types):
    LIST_DM.append(obj + "," + protocols + "," + supported + "," + types)


def parse_standard_object(dmobject, value):
    hasobj = bbf.obj_has_child(value)
    hasparam = bbf.obj_has_param(value)
@@ -78,80 +58,89 @@ def parse_standard_object( dmobject , value ):
        add_data_to_list_dm(dmobject, supported, getprotocols(value), "object")
    if hasparam:
        if isinstance(value, dict):
            for k, v in value.items():
                if k == "mapping":
                    continue
                if isinstance(v, dict):
                    for k1, v1 in v.items():
                        if k1 == "type" and v1 != "object":
                            if "()" in k:
                                supported = check_commands(dmobject + k)
                                add_data_to_list_dm(
                                    dmobject + k, supported, getprotocols(v), "operate")
                            else:
                                supported = check_param_obj(dmobject + k)
                                add_data_to_list_dm(
                                    dmobject + k, supported, getprotocols(v), "parameter")
                            break
    if hasobj:
        if isinstance(value, dict):
            for k, v in value.items():
                if isinstance(v, dict):
                    for k1, v1 in v.items():
                        if k1 == "type" and v1 == "object":
                            parse_standard_object(k, v)


def parse_dynamic_object(dm_name_list):
    if isinstance(dm_name_list, list) is False:
        return None

    for value in bbf.LIST_SUPPORTED_DM:
        obj = value.split(",")
        for dm in dm_name_list:
            JSON_FILE = bbf.ARRAY_JSON_FILES.get(dm, None)
            if JSON_FILE is None:
                continue
            if dm == "tr181" and ".Services." in obj[0]:
                continue
            if dm == "tr104" and ".Services." not in obj[0]:
                continue
            if dm == "tr135" and ".Services." not in obj[0]:
                continue
            dmType = "object" if obj[2] == "DMT_OBJ" else "parameter"
            add_data_to_list_dm(obj[0], "Yes", "CWMP+USP", dmType)


def parse_object_tree(dm_name_list):
    if isinstance(dm_name_list, list) is False:
        return None

    for dm in dm_name_list:
        JSON_FILE = bbf.ARRAY_JSON_FILES.get(dm, None)
        if JSON_FILE is not None:
            file = open(JSON_FILE, "r")
            data = json.loads(file.read(), object_pairs_hook=OrderedDict)
            for obj, value in data.items():
                if obj is None:
                    print("!!!! %s : Wrong JSON Data model format!" % dm)
                    continue
                parse_standard_object(obj, value)
        else:
            print("!!!! %s : Data Model doesn't exist" % dm)
    parse_dynamic_object(dm_name_list)


def generate_excel_file(output_file):
    bbf.remove_file(output_file)
    LIST_DM.sort(reverse=False)
    wb = xlwt.Workbook(style_compression=2)
    sheet = wb.add_sheet('CWMP-USP')
    xlwt.add_palette_colour("custom_colour_yellow", 0x10)
@@ -162,7 +151,8 @@ def generate_excel_file():
    wb.set_colour_RGB(0x20, 102, 205, 170)
    wb.set_colour_RGB(0x30, 153, 153, 153)
    style_title = xlwt.easyxf(
        'pattern: pattern solid, fore_colour custom_colour_grey;''font: bold 1, color black;''alignment: horizontal center;')
    sheet.write(0, 0, 'OBJ/PARAM/OPERATE', style_title)
    sheet.write(0, 1, 'Protocols', style_title)
    sheet.write(0, 2, 'Supported', style_title)
@@ -173,16 +163,20 @@ def generate_excel_file():
        i += 1
        if param[3] == "object":
            style_name = xlwt.easyxf(
                'pattern: pattern solid, fore_colour custom_colour_yellow')
            style = xlwt.easyxf(
                'pattern: pattern solid, fore_colour custom_colour_yellow;''alignment: horizontal center;')
        elif param[3] == "operate":
            style_name = xlwt.easyxf(
                'pattern: pattern solid, fore_colour custom_colour_green')
            style = xlwt.easyxf(
                'pattern: pattern solid, fore_colour custom_colour_green;''alignment: horizontal center;')
        else:
            style_name = None
            style = xlwt.easyxf('alignment: horizontal center;')
        if style_name is not None:
            sheet.write(i, 0, param[0], style_name)
        else:
            sheet.write(i, 0, param[0])
@@ -194,37 +188,79 @@ def generate_excel_file():
    sheet.col(1).width = 175*20
    sheet.col(2).width = 175*20
    wb.save(output_file)


def generate_excel(dm_name_list, output_file="datamodel.xls"):
    print("Generating BBF Data Models in Excel format...")
    bbf.fill_list_supported_dm()
    parse_object_tree(dm_name_list)
    generate_excel_file(output_file)
    if os.path.isfile(output_file):
        print("└── Excel file generated: %s" % output_file)
    else:
        print("└── Error in excel file generation: %s" % output_file)


### main ###
if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        description='Script to generate the list of supported and non-supported parameters in xls format',
        epilog='Part of BBF-tools, refer to the README for more examples'
    )

    parser.add_argument(
        '-d', '--datamodel',
        action='append',
        metavar='tr181',
        choices=['tr181', 'tr104'],
        required=True,
    )

    parser.add_argument(
        '-r', '--remote-dm',
        action='append',
        metavar='https://dev.iopsys.eu/iopsys/stunc.git^devel',
        help='Include OBJ/PARAM defined under remote repositories defined as bbf plugins'
    )

    parser.add_argument(
        '-v', '--vendor-list',
        metavar='iopsys',
        action='append',
        help='Generate data model tree with vendor extension OBJ/PARAM'
    )

    parser.add_argument(
        '-p', '--vendor-prefix',
        default='iopsys',
        metavar='X_IOPSYS_EU_',
        help='Generate data model tree using the provided vendor prefix for vendor-defined objects'
    )

    parser.add_argument(
        '-o', '--output',
        default="datamodel.xls",
        metavar="supported_datamodel.xls",
        help='Generate the output file with the given name'
    )

    args = parser.parse_args()
    plugins = []

    if isinstance(args.remote_dm, list):
        for f in args.remote_dm:
            x = f.split('^')
            r = {}
            r["repo"] = x[0]
            if len(x) == 2:
                r["version"] = x[1]
            plugins.append(r)

    bbf.generate_supported_dm(args.vendor_prefix, args.vendor_list, plugins)
    bbf.clean_supported_dm_list()
    generate_excel(args.datamodel, args.output)
    print("Datamodel generation completed, artifacts available in %s" % args.output)
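The `repo^version` splitting done inline in the main block above can be exercised in isolation; a small sketch (the function name is hypothetical, the logic mirrors the loop over `args.remote_dm`):

```python
def parse_plugin_spec(spec):
    # Split "repo^version" into the dict shape passed to
    # bbf.generate_supported_dm(); the version part is optional.
    x = spec.split('^')
    r = {"repo": x[0]}
    if len(x) == 2:
        r["version"] = x[1]
    return r

plugin = parse_plugin_spec("https://dev.iopsys.eu/iopsys/stunc.git^devel")
```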


@@ -1,847 +0,0 @@
#!/usr/bin/python3
# Copyright (C) 2020 iopsys Software Solutions AB
# Author: Amin Ben Ramdhane <amin.benramdhane@pivasoftware.com>
import os, sys, time, re, json
import xml.etree.ElementTree as xml
from collections import OrderedDict
from shutil import copyfile
import bbf_common as bbf
listTypes = ["string",
"unsignedInt",
"unsignedLong",
"int",
"long",
"boolean",
"dateTime",
"hexBinary",
"base64"]
listdataTypes = ["string",
"unsignedInt",
"unsignedLong",
"int",
"long",
"boolean",
"dateTime",
"hexBinary",
"base64",
"IPAddress",
"IPv4Address",
"IPv6Address",
"IPPrefix",
"IPv4Prefix",
"IPv6Prefix",
"MACAddress",
"decimal",
"IoTDeviceType",
"IoTLevelType",
"IoTUnitType",
"IoTEnumSensorType",
"IoTEnumControlType"]
def getname( objname ):
global model_root_name
OBJSname = objname
if (objname.count('.') > 1 and (objname.count('.') != 2 or objname.count('{i}') != 1) ):
OBJSname = objname.replace(dmroot1.get('name'), "", 1)
OBJSname = OBJSname.replace("{i}", "")
OBJSname = OBJSname.replace(".", "")
if (objname.count('.') == 1):
model_root_name = OBJSname
OBJSname = "Root" + OBJSname
return OBJSname
if (objname.count('.') == 2 and objname.count('{i}') == 1):
model_root_name = OBJSname
OBJSname = "Services" + OBJSname
return OBJSname
return OBJSname
def getparamtype( dmparam ):
ptype = None
for s in dmparam:
if s.tag == "syntax":
for c in s:
if c.tag == "list":
ptype = "string"
break
if c.tag == "dataType":
reftype = c.get("ref")
if "StatsCounter" in reftype:
ptype = "unsignedInt"
break
ptype = "string"
break
ptype = c.tag
break
break
if ptype == None:
ptype = "__NA__"
return ptype
def getMinMaxEnumerationUnitPatternparam(paramtype, c):
paramvalrange = None
paramenum = None
paramunit = None
parampattern = None
if paramtype == "string" or paramtype == "hexBinary" or paramtype == "base64":
for cc in c:
if cc.tag == "size":
if paramvalrange == None:
paramvalrange = "%s,%s" % (cc.get("minLength"), cc.get("maxLength"))
else:
paramvalrange = "%s;%s,%s" % (paramvalrange, cc.get("minLength"), cc.get("maxLength"))
if cc.tag == "enumeration":
if paramenum == None:
paramenum = "\"%s\"" % cc.get('value')
else:
paramenum = "%s, \"%s\"" % (paramenum, cc.get('value'))
if cc.tag == "pattern":
if parampattern == None:
parampattern = "\"%s\"" % cc.get('value')
elif cc.get('value') != "":
parampattern = "%s,\"%s\"" % (parampattern, cc.get('value'))
elif paramtype == "unsignedInt" or paramtype == "int" or paramtype == "unsignedLong" or paramtype == "long":
for cc in c:
if cc.tag == "range":
if paramvalrange == None:
paramvalrange = "%s,%s" % (cc.get("minInclusive"), cc.get("maxInclusive"))
else:
paramvalrange = "%s;%s,%s" % (paramvalrange, cc.get("minInclusive"), cc.get("maxInclusive"))
if cc.tag == "units":
paramunit = cc.get("value")
return paramvalrange, paramenum, paramunit, parampattern
def getparamdatatyperef( datatyperef ):
paramvalrange = None
paramenum = None
paramunit = None
parampattern = None
for d in xmlroot1:
if d.tag == "dataType" and d.get("name") == datatyperef:
if d.get("base") != "" and d.get("base") != None and d.get("name") == "Alias":
paramvalrange, paramenum, paramunit, parampattern = getparamdatatyperef(d.get("base"))
else:
for dd in d:
if dd.tag in listTypes:
paramvalrange, paramenum, paramunit, parampattern = getMinMaxEnumerationUnitPatternparam(dd.tag, dd)
break
if dd.tag == "size":
if paramvalrange == None:
paramvalrange = "%s,%s" % (dd.get("minLength"), dd.get("maxLength"))
else:
paramvalrange = "%s;%s,%s" % (paramvalrange, dd.get("minLength"), dd.get("maxLength"))
if dd.tag == "enumeration":
if paramenum == None:
paramenum = "\"%s\"" % dd.get('value')
else:
paramenum = "%s, \"%s\"" % (paramenum, dd.get('value'))
if dd.tag == "pattern":
if parampattern == None:
parampattern = "\"%s\"" % dd.get('value')
elif dd.get('value') != "":
parampattern = "%s,\"%s\"" % (parampattern, dd.get('value'))
break
return paramvalrange, paramenum, paramunit, parampattern
def getparamlist( dmparam ):
minItem = None
maxItem = None
maxsize = None
minItem = dmparam.get("minItems")
maxItem = dmparam.get("maxItems")
for cc in dmparam:
if cc.tag == "size":
maxsize = cc.get("maxLength")
return minItem, maxItem, maxsize
def getparamoption( dmparam ):
datatype = None
paramvalrange = None
paramenum = None
paramunit = None
parampattern = None
listminItem = None
listmaxItem = None
listmaxsize = None
islist = 0
for s in dmparam:
if s.tag == "syntax":
for c in s:
if c.tag == "list":
islist = 1
listminItem, listmaxItem, listmaxsize = getparamlist(c)
for c in s:
datatype = c.tag if c.tag in listdataTypes else None
if datatype != None:
paramvalrange, paramenum, paramunit, parampattern = getMinMaxEnumerationUnitPatternparam(datatype, c)
break
if c.tag == "dataType":
datatype = c.get("ref")
paramvalrange, paramenum, paramunit, parampattern = getparamdatatyperef(c.get("ref"))
break
if islist == 0:
datatype = c.tag if c.tag in listdataTypes else None
if datatype != None:
paramvalrange, paramenum, paramunit, parampattern = getMinMaxEnumerationUnitPatternparam(datatype, c)
break
if c.tag == "dataType":
datatype = c.get("ref")
paramvalrange, paramenum, paramunit, parampattern = getparamdatatyperef(datatype)
break
break
return islist, datatype, paramvalrange, paramenum, paramunit, parampattern, listminItem, listmaxItem, listmaxsize
listmapping = []
def generatelistfromfile(dmobject):
obj = dmobject.get('name').split(".")
if "tr-104" in sys.argv[1]:
pathfilename = "../dmtree/tr104/" + obj[1].lower() + ".c"
pathiopsyswrtfilename = "../dmtree/tr104/" + obj[1].lower() + "-iopsyswrt.c"
else:
pathfilename = "../dmtree/tr181/" + obj[1].lower() + ".c"
pathiopsyswrtfilename = "../dmtree/tr181/" + obj[1].lower() + "-iopsyswrt.c"
for x in range(0, 2):
pathfile = pathfilename if x == 0 else pathiopsyswrtfilename
exists = os.path.isfile(pathfile)
if exists:
filec = open(pathfile, "r")
for linec in filec:
if "/*#" in linec:
listmapping.append(linec)
else:
pass
def getparammapping(dmobject, dmparam):
hasmapping = 0
mapping = ""
if "tr-104" in sys.argv[1]:
param = "Device.Services." + dmobject.get('name') + dmparam.get('name')
else:
param = dmobject.get('name') + dmparam.get('name')
for value in listmapping:
if param in value:
hasmapping = 1
config_type = value.split("!")
mapping = config_type[1]
mapping = mapping.replace("*/\n", "")
break
return hasmapping, mapping
def getobjmapping(dmobject):
hasmapping = 0
mapping = ""
if "tr-104" in sys.argv[1]:
obj = "Device.Services." + dmobject.get('name')
else:
obj = dmobject.get('name')
for value in listmapping:
config_type = value.split("!")
mapping = config_type[0]
mapping = mapping.replace("/*#", "")
if obj == mapping:
hasmapping = 1
mapping = config_type[1]
mapping = mapping.replace("*/\n", "")
break
return hasmapping, mapping
def objhaschild (parentname, level, check_obj):
hasobj = 0
model = model2 if check_obj == 0 else model1
for c in model:
objname = c.get('name')
if c.tag == "object" and parentname in objname and (objname.count('.') - objname.count('{i}')) == level:
hasobj = 1
break
return hasobj
def objhasparam (dmobject):
hasparam = 0
for c in dmobject:
if c.tag == "parameter":
hasparam = 1
break
return hasparam
def getuniquekeys (dmobject):
uniquekeys = None
for c in dmobject:
if c.tag == "uniqueKey":
for s in c:
if s.tag == "parameter":
if uniquekeys == None:
uniquekeys = "\"%s\"" % s.get('ref')
else:
uniquekeys = uniquekeys + "," + "\"%s\"" % s.get('ref')
return uniquekeys
def printopenobject (obj):
fp = open('./.json_tmp', 'a')
if "tr-104" in sys.argv[1] or "tr-135" in sys.argv[1]:
print("\"Device.Services.%s\" : {" % obj.get('name').replace(" ", ""), file=fp)
else:
print("\"%s\" : {" % obj.get('name').replace(" ", ""), file=fp)
fp.close()
def printopenfile ():
fp = open('./.json_tmp', 'a')
print("{", file=fp)
fp.close()
def printclosefile ():
fp = open('./.json_tmp', 'a')
print("}", file=fp)
fp.close()
def printOBJMaPPING (mapping):
fp = open('./.json_tmp', 'a')
config_type = mapping.split(":")
config = config_type[1].split("/")
print("\"mapping\": {", file=fp)
print("\"type\": \"%s\"," % config_type[0].lower(), file=fp)
print("\"%s\": {" % config_type[0].lower(), file=fp)
# UCI
if config_type[0] == "UCI":
print("\"file\": \"%s\"," % config[0], file=fp)
print("\"section\": {", file=fp)
print("\"type\": \"%s\"" % config[1], file=fp)
print("},", file=fp)
print("\"dmmapfile\": \"%s\"" % config[2], file=fp)
# UBUS
elif config_type[0] == "UBUS":
print("\"object\": \"%s\"," % config[0], file=fp)
print("\"method\": \"%s\"," % config[1], file=fp)
print("\"args\": {", file=fp)
if config[2] != "":
args = config[2].split(",")
print("\"%s\": \"%s\"" % (args[0], args[1]), file=fp)
print("}", file=fp)
print("\"key\": \"%s\"" % config[3], file=fp)
print("}\n}", file=fp)
fp.close()
def printPARAMMaPPING (mapping):
fp = open('./.json_tmp', 'a')
lst = mapping.split("&")
print("\"mapping\": [", file=fp)
for i in range(len(lst)):
config_type = lst[i].split(":")
config = config_type[1].split("/")
print("{", file=fp)
print("\"type\": \"%s\"," % config_type[0].lower(), file=fp)
# SYSFS || PROCFS
if config_type[0] == "SYSFS" or config_type[0] == "PROCFS":
print("\"file\": \"%s\"" % config_type[1], file=fp)
# UCI, UBUS, CLI
else:
# Only for UCI, UBUS, CLI
print("\"%s\": {" % config_type[0].lower(), file=fp)
# UCI
if config_type[0] == "UCI":
print("\"file\": \"%s\"," % config[0], file=fp)
print("\"section\": {", file=fp)
var = config[1].split(",")
if len(var) == 1:
print("\"type\": \"%s\"" % var[0], file=fp)
elif len(var) > 1 and "@i" in var[1]:
print("\"type\": \"%s\"," % var[0], file=fp)
print("\"index\": \"%s\"" % var[1], file=fp)
elif len(var) > 1:
print("\"type\": \"%s\"," % var[0], file=fp)
print("\"name\": \"%s\"" % var[1], file=fp)
print("}", file=fp)
if len(var) > 1:
print("\"option\": {", file=fp)
print("\"name\": \"%s\"" % config[2], file=fp)
print("}", file=fp)
# UBUS
elif config_type[0] == "UBUS":
print("\"object\": \"%s\"," % config[0], file=fp)
print("\"method\": \"%s\"," % config[1], file=fp)
print("\"args\": {", file=fp)
if config[2] != "":
args = config[2].split(",")
print("\"%s\": \"%s\"" % (args[0], args[1]), file=fp)
print("}", file=fp)
print("\"key\": \"%s\"" % config[3], file=fp)
# CLI
elif config_type[0] == "CLI":
print("\"command\": \"%s\"," % config[0], file=fp)
print("\"args\": \"%s\"" % config[1], file=fp)
print("}", file=fp)
print("}", file=fp)
print("]\n}", file=fp)
fp.close()
def removelastline ():
file = open("./.json_tmp")
lines = file.readlines()
lines = lines[:-1]
file.close()
w = open("./.json_tmp",'w')
w.writelines(lines)
w.close()
printclosefile ()
def replace_data_in_file( data_in, data_out ):
file_r = open("./.json_tmp", "rt")
file_w = open("./.json_tmp_1", "wt")
text = ''.join(file_r).replace(data_in, data_out)
file_w.write(text)
file_r.close()
file_w.close()
copyfile("./.json_tmp_1", "./.json_tmp")
bbf.remove_file("./.json_tmp_1")
def updatejsontmpfile ():
replace_data_in_file ("}\n", "},\n")
replace_data_in_file ("},\n},", "}\n},")
replace_data_in_file ("}\n},\n},", "}\n}\n},")
replace_data_in_file ("}\n},\n}\n},", "}\n}\n}\n},")
replace_data_in_file ("}\n},\n}\n}\n},", "}\n}\n}\n}\n},")
replace_data_in_file ("}\n}\n}\n},\n}\n},", "}\n}\n}\n}\n}\n},")
replace_data_in_file ("}\n}\n}\n}\n}\n}\n},", "}\n}\n}\n}\n}\n}\n},")
replace_data_in_file ("}\n}\n}\n},\n}\n}\n}\n},", "}\n}\n}\n}\n}\n}\n}\n},")
replace_data_in_file ("},\n]", "}\n]")
def removetmpfiles():
bbf.remove_file("./.json_tmp")
bbf.remove_file("./.json_tmp_1")
def printOBJ( dmobject, hasobj, hasparam, bbfdm_type ):
uniquekeys = getuniquekeys(dmobject)
hasmapping, mapping = getobjmapping(dmobject)
if (dmobject.get('name')).endswith(".{i}."):
fbrowse = "true"
else:
fbrowse = "false"
fp = open('./.json_tmp', 'a')
print("\"type\" : \"object\",", file=fp)
print("\"protocols\" : [%s]," % bbfdm_type, file=fp)
if uniquekeys != None:
print("\"uniqueKeys\" : [%s]," % uniquekeys, file=fp)
if (dmobject.get('access') == "readOnly"):
print("\"access\" : false,", file=fp)
else:
print("\"access\" : true,", file=fp)
if hasparam or hasobj:
print("\"array\" : %s," % fbrowse, file=fp)
else:
print("\"array\" : %s" % fbrowse, file=fp)
fp.close()
if hasmapping:
printOBJMaPPING (mapping)
def printPARAM( dmparam, dmobject, bbfdm_type ):
hasmapping, mapping = getparammapping(dmobject, dmparam)
islist, datatype, paramvalrange, paramenum, paramunit, parampattern, listminItem, listmaxItem, listmaxsize = getparamoption(dmparam)
fp = open('./.json_tmp', 'a')
print("\"%s\" : {" % dmparam.get('name').replace(" ", ""), file=fp)
print("\"type\" : \"%s\"," % getparamtype(dmparam), file=fp)
print("\"read\" : true,", file=fp)
print("\"write\" : %s," % ("false" if dmparam.get('access') == "readOnly" else "true"), file=fp)
print("\"protocols\" : [%s]," % bbfdm_type, file=fp)
# create list
if islist == 1:
print("\"list\" : {", file=fp)
# add datatype
print(("\"datatype\" : \"%s\"," % datatype) if (listmaxsize != None or listminItem != None or listmaxItem != None or paramvalrange != None or paramunit != None or paramenum != None or parampattern != None or (hasmapping and islist == 0)) else ("\"datatype\" : \"%s\"" % datatype), file=fp)
if islist == 1:
# add maximum size of list
if listmaxsize != None:
print(("\"maxsize\" : %s," % listmaxsize) if (listminItem != None or listmaxItem != None or paramvalrange != None or paramunit != None or paramenum != None or parampattern != None) else ("\"maxsize\" : %s" % listmaxsize), file=fp)
# add minimum and maximum item values
if listminItem != None and listmaxItem != None:
print("\"item\" : {", file=fp)
print("\"min\" : %s," % listminItem, file=fp)
print("\"max\" : %s" % listmaxItem, file=fp)
print(("},") if (paramvalrange != None or paramunit != None or paramenum != None or parampattern != None) else ("}"), file=fp)
elif listminItem != None and listmaxItem == None:
print("\"item\" : {", file=fp)
print("\"min\" : %s" % listminItem, file=fp)
print(("},") if (paramvalrange != None or paramunit != None or paramenum != None or parampattern != None) else ("}"), file=fp)
elif listminItem == None and listmaxItem != None:
print("\"item\" : {", file=fp)
print("\"max\" : %s" % listmaxItem, file=fp)
print(("},") if (paramvalrange != None or paramunit != None or paramenum != None or parampattern != None) else ("}"), file=fp)
# add minimum and maximum values
if paramvalrange != None:
valranges = paramvalrange.split(";")
print("\"range\" : [", file=fp)
for eachvalrange in valranges:
valrange = eachvalrange.split(",")
if valrange[0] != "None" and valrange[1] != "None":
print("{", file=fp)
print("\"min\" : %s," % valrange[0], file=fp)
print("\"max\" : %s" % valrange[1], file=fp)
print(("},") if (eachvalrange == valranges[len(valranges)-1]) else ("}"), file=fp)
elif valrange[0] != "None" and valrange[1] == "None":
print("{", file=fp)
print("\"min\" : %s" % valrange[0], file=fp)
print(("},") if (eachvalrange == valranges[len(valranges)-1]) else ("}"), file=fp)
elif valrange[0] == "None" and valrange[1] != "None":
print("{", file=fp)
print("\"max\" : %s" % valrange[1], file=fp)
print(("},") if (eachvalrange == valranges[len(valranges)-1]) else ("}"), file=fp)
print(("],") if (paramunit != None or paramenum != None or parampattern != None or (hasmapping and islist == 0)) else ("]"), file=fp)
# add unit
if paramunit != None:
print(("\"unit\" : \"%s\"," % paramunit) if (paramenum != None or parampattern != None or (hasmapping and islist == 0)) else ("\"unit\" : \"%s\"" % paramunit), file=fp)
# add enumeration
if paramenum != None:
print(("\"enumerations\" : [%s]," % paramenum) if (parampattern != None or (hasmapping and islist == 0)) else ("\"enumerations\" : [%s]" % paramenum), file=fp)
# add pattern
if parampattern != None:
print(("\"pattern\" : [%s]," % parampattern.replace("\\", "\\\\")) if (hasmapping and islist == 0) else ("\"pattern\" : [%s]" % parampattern.replace("\\", "\\\\")), file=fp)
# close list
if islist == 1:
print(("},") if hasmapping else ("}"), file=fp)
# add mapping
if hasmapping:
fp.close()
printPARAMMaPPING(mapping)
else:
print("}", file=fp)
fp.close()
def printCOMMAND( dmparam, dmobject, bbfdm_type ):
fp = open('./.json_tmp', 'a')
print("\"%s\" : {" % dmparam.get('name'), file=fp)
print("\"type\" : \"command\",", file=fp)
inputfound = 0
outputfound = 0
for c in dmparam:
if c.tag == "input":
inputfound = 1
elif c.tag == "output":
outputfound = 1
print(("\"protocols\" : [\"usp\"],") if (inputfound or outputfound) else ("\"protocols\" : [\"usp\"]"), file=fp)
for c in dmparam:
if c.tag == "input":
print("\"input\" : {", file=fp)
for param in c:
if param.tag == "parameter":
fp.close()
printPARAM(param, dmobject, "\"usp\"")
fp = open('./.json_tmp', 'a')
print("}" if outputfound else "},", file=fp)
if c.tag == "output":
print("\"output\" : {", file=fp)
for param in c:
if param.tag == "parameter":
fp.close()
printPARAM(param, dmobject, "\"usp\"")
fp = open('./.json_tmp', 'a')
print("}", file=fp)
print("}", file=fp)
fp.close()
def printusage():
print("Usage: " + sys.argv[0] + " <tr-xxx cwmp xml data model> <tr-xxx usp xml data model> [Object path]")
print("Examples:")
print(" - " + sys.argv[0] + " tr-181-2-14-1-cwmp-full.xml tr-181-2-14-1-usp-full.xml Device.")
print(" ==> Generate the json file of the sub tree Device. in tr181.json")
print(" - " + sys.argv[0] + " tr-104-2-0-2-cwmp-full.xml tr-104-2-0-2-usp-full.xml Device.Services.VoiceService.")
print(" ==> Generate the json file of the sub tree Device.Services.VoiceService. in tr104.json")
print(" - " + sys.argv[0] + " tr-106-1-2-0-full.xml Device.")
print(" ==> Generate the json file of the sub tree Device. in tr106.json")
print("")
print("Example of xml data model file: https://www.broadband-forum.org/cwmp/tr-181-2-14-1-cwmp-full.xml")
exit(1)
def getobjectpointer( objname ):
obj = None
for c in model1:
if c.tag == "object" and (c.get('name') == objname or c.get('name') == (objname + "{i}.")):
obj = c
break
return obj
def chech_each_obj_with_other_obj(model1, model2):
for c in model2:
if c.tag == "object":
found = 0
for obj in model1:
if obj.tag == "object" and (obj.get('name') == c.get('name')):
found = 1
break
if found == 0:
if c.get('name').count(".") - (c.get('name')).count("{i}.") != 2:
continue
dmlevel = (c.get('name')).count(".") - (c.get('name')).count("{i}.") + 1
printopenobject(c)
object_parse_childs(c, dmlevel, 0, 0)
printclosefile ()
def check_if_obj_exist_in_other_xml_file( objname ):
obj = None
found = 0
for c in model2:
if c.tag == "object" and (c.get('name') == objname.get('name')):
obj = c
found = 1
break
return obj, found
def chech_current_param_exist_in_other_obj(obj, c):
bbfdm_type = ""
for param in obj:
if param.tag == "parameter" and param.get('name') == c.get('name'):
bbfdm_type = "\"cwmp\", \"usp\""
break
if bbfdm_type == "" and "cwmp" in sys.argv[1]:
bbfdm_type = "\"cwmp\""
elif bbfdm_type == "" and "usp" in sys.argv[1]:
bbfdm_type = "\"usp\""
return bbfdm_type
def chech_obj_with_other_obj(obj, dmobject):
for c in obj:
exist = 0
if c.tag == "parameter":
for param in dmobject:
if param.tag == "parameter" and c.get('name') == param.get('name'):
exist = 1
break
if exist == 0 and "cwmp" in sys.argv[1]:
printPARAM(c, obj, "\"usp\"")
elif exist == 0 and "usp" in sys.argv[1]:
printPARAM(c, obj, "\"cwmp\"")
if c.tag == "command":
printCOMMAND(c, obj, "\"usp\"")
def object_parse_childs(dmobject, level, generatelist, check_obj):
if generatelist == 0 and (dmobject.get('name')).count(".") == 2:
generatelistfromfile(dmobject)
if check_obj == 1 and ("tr-181" in sys.argv[1] or "tr-104" in sys.argv[1]):
obj, exist = check_if_obj_exist_in_other_xml_file(dmobject)
hasobj = objhaschild(dmobject.get('name'), level, check_obj)
hasparam = objhasparam(dmobject)
if check_obj == 1 and "tr-181" in sys.argv[1] and exist == 0:
printOBJ(dmobject, hasobj, hasparam, "\"cwmp\"")
elif check_obj == 0 and "tr-181" in sys.argv[1]:
printOBJ(dmobject, hasobj, hasparam, "\"usp\"")
else:
printOBJ(dmobject, hasobj, hasparam, "\"cwmp\", \"usp\"")
if hasparam:
for c in dmobject:
if c.tag == "parameter":
if check_obj == 1 and "tr-181" in sys.argv[1] and exist == 1:
bbfdm_type = chech_current_param_exist_in_other_obj(obj, c)
elif check_obj == 1 and "tr-181" in sys.argv[1] and exist == 0:
bbfdm_type = "\"cwmp\""
elif check_obj == 0:
bbfdm_type = "\"usp\""
else:
bbfdm_type = "\"cwmp\", \"usp\""
printPARAM(c, dmobject, bbfdm_type)
if c.tag == "command":
printCOMMAND(c, dmobject, "\"usp\"")
if check_obj == 1 and "tr-181" in sys.argv[1] and exist == 1:
chech_obj_with_other_obj(obj, dmobject)
if hasobj and check_obj:
for c in model1:
objname = c.get('name')
if c.tag == "object" and dmobject.get('name') in objname and (objname.count('.') - objname.count('{i}')) == level:
printopenobject(c)
object_parse_childs(c, level+1, 0, 1)
printclosefile ()
if hasobj and check_obj == 0:
for c in model2:
objname = c.get('name')
if c.tag == "object" and dmobject.get('name') in objname and (objname.count('.') - objname.count('{i}')) == level:
printopenobject(c)
object_parse_childs(c, level+1, 0, 0)
printclosefile ()
return
def generatejsonfromobj(pobj, pdir):
generatelist = 0
bbf.create_folder(pdir)
removetmpfiles()
dmlevel = (pobj.get('name')).count(".") - (pobj.get('name')).count("{i}.") + 1
if (pobj.get('name')).count(".") == 1:
generatelist = 0
else:
generatelistfromfile(pobj)
generatelist = 1
printopenfile ()
printopenobject(pobj)
object_parse_childs(pobj, dmlevel, generatelist, 1)
if "tr-181" in sys.argv[1] and Root.count(".") == 1:
chech_each_obj_with_other_obj(model1, model2)
if "tr-181" in sys.argv[1] and pobj.get("name").count(".") == 1:
dmfp = open(pdir + "/tr181.json", "a")
elif "tr-104" in sys.argv[1] and pobj.get("name").count(".") == 2:
dmfp = open(pdir + "/tr104.json", "a")
elif "tr-135" in sys.argv[1] and pobj.get("name").count(".") == 2:
dmfp = open(pdir + "/tr135.json", "a")
elif "tr-106" in sys.argv[1] and pobj.get("name").count(".") == 1:
dmfp = open(pdir + "/tr106.json", "a")
else:
dmfp = open(pdir + "/" + (getname(pobj.get('name'))).lower() + ".json", "a")
printclosefile ()
printclosefile ()
updatejsontmpfile ()
removelastline ()
f = open("./.json_tmp", "r")
obj = json.load(f, object_pairs_hook=OrderedDict)
dump = json.dumps(obj, indent=4)
tabs = re.sub('\n +', lambda match: '\n' + '\t' * int(len(match.group().strip('\n')) / 4), dump)
try:
print("%s" % tabs, file=dmfp)
dmfp.close()
except:
pass
removetmpfiles()
return dmfp.name
### main ###
if len(sys.argv) < 4:
printusage()
if (sys.argv[1]).lower() == "-h" or (sys.argv[1]).lower() == "--help":
printusage()
is_service_model = 0
model_root_name = "Root"
tree1 = xml.parse(sys.argv[1])
xmlroot1 = tree1.getroot()
model1 = xmlroot1
for child in model1:
if child.tag == "model":
model1 = child
if model1.tag != "model":
print("Wrong %s XML Data model format!" % sys.argv[1])
exit(1)
dmroot1 = None
for c in model1:
if c.tag == "object" and c.get("name").count(".") == 1:
dmroot1 = c
break
#If it is service data model
if dmroot1 is None:
is_service_model = 1
for c in model1:
if c.tag == "object" and c.get("name").count(".") == 2:
dmroot1 = c
break
if dmroot1 is None:
print("Wrong %s XML Data model format!" % sys.argv[1])
exit(1)
if "tr-181" in sys.argv[1] or "tr-104" in sys.argv[1]:
tree2 = xml.parse(sys.argv[2])
xmlroot2 = tree2.getroot()
model2 = xmlroot2
for child in model2:
if child.tag == "model":
model2 = child
if model2.tag != "model":
print("Wrong %s XML Data model format!" % sys.argv[2])
exit(1)
dmroot2 = None
for c in model2:
if c.tag == "object" and c.get("name").count(".") == 1:
dmroot2 = c
break
#If it is service data model
	if dmroot2 is None:
for c in model2:
if c.tag == "object" and c.get("name").count(".") == 2:
dmroot2 = c
break
	if dmroot2 is None:
print("Wrong %s XML Data model format!" % sys.argv[2])
exit(1)
Root = sys.argv[3]
if "tr-181" in sys.argv[1]:
gendir = "tr181_" + time.strftime("%Y-%m-%d_%H-%M-%S")
elif "tr-104" in sys.argv[1]:
gendir = "tr104_" + time.strftime("%Y-%m-%d_%H-%M-%S")
Root = (sys.argv[3])[len("Device.Services."):]
elif "tr-135" in sys.argv[1]:
gendir = "tr135_" + time.strftime("%Y-%m-%d_%H-%M-%S")
Root = (sys.argv[3])[len("Device.Services."):]
elif "tr-106" in sys.argv[1]:
gendir = "tr106_" + time.strftime("%Y-%m-%d_%H-%M-%S")
else:
gendir = "source_" + time.strftime("%Y-%m-%d_%H-%M-%S")
objstart = getobjectpointer(Root)
if objstart is None:
print("Wrong Object Name! %s" % Root)
exit(1)
filename = generatejsonfromobj(objstart, gendir)
print(filename)
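The tab re-indentation that `generatejsonfromobj()` applies to the `json.dumps()` output can be exercised in isolation. The helper name below is illustrative only, not part of the tool; it shows the same regex trick of swapping each run of four leading spaces for one tab:

```python
import json
import re

def dumps_with_tabs(obj):
    # json.dumps() indents with spaces; swap each run of 4 leading
    # spaces after a newline for one tab, matching the repo's style.
    dump = json.dumps(obj, indent=4)
    return re.sub('\n +',
                  lambda m: '\n' + '\t' * (len(m.group().strip('\n')) // 4),
                  dump)

print(dumps_with_tabs({"Device.": {"type": "object"}}))
```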
View file
@@ -4,15 +4,11 @@
 # Author: Amin Ben Ramdhane <amin.benramdhane@pivasoftware.com>
 
 import os
-import sys
-import getopt
-import bbf_common as bbf
+import argparse
 import xml.etree.ElementTree as ET
 import xml.dom.minidom as MD
-import numpy as np
+import bbf_common as bbf
 
-BBF_REMOTE_DM = None
-BBF_VENDOR_LIST = None
 DM_OBJ_COUNT = 0
 DM_PARAM_COUNT = 0
 DEVICE_PROTOCOL = "DEVICE_PROTOCOL_DSLFTR069v1"
@@ -21,60 +17,29 @@ MANUFACTURER_OUI = "002207"
 PRODUCT_CLASS = "DG400PRIME"
 MODEL_NAME = "DG400PRIME-A"
 SOFTWARE_VERSION = "1.2.3.4"
-XML_FORMAT = "BBF"
-XML_FILE = "datamodel.xml"
 
-ARRAY_TYPES = { "DMT_STRING" : "string",
-                "DMT_UNINT" : "unsignedInt",
-                "DMT_UNLONG" : "unsignedLong",
-                "DMT_INT" : "int",
-                "DMT_LONG" : "long",
-                "DMT_BOOL" : "boolean",
-                "DMT_TIME" : "dateTime",
-                "DMT_HEXBIN" : "hexBinary",
-                "DMT_BASE64" : "base64"}
+ARRAY_TYPES = {"DMT_STRING": "string",
+               "DMT_UNINT": "unsignedInt",
+               "DMT_UNLONG": "unsignedLong",
+               "DMT_INT": "int",
+               "DMT_LONG": "long",
+               "DMT_BOOL": "boolean",
+               "DMT_TIME": "dateTime",
+               "DMT_HEXBIN": "hexBinary",
+               "DMT_BASE64": "base64"}
 
-def print_dmxml_usage():
-    print("Usage: " + sys.argv[0] + " [options...] <urls>")
-    print("Options: ")
-    print("  -r, --remote-dm          Check OBJ/PARAM under these repositories if it is not found under bbf repo")
-    print("  -v, --vendor-list        Generate data model tree with vendor extension OBJ/PARAM")
-    print("  -p, --vendor-prefix      Generate data model tree using this vendor prefix. Default vendor prefix: %s" % bbf.BBF_VENDOR_PREFIX)
-    print("  -f, --format             Generate data model tree with HDM format. Default format: %s" % XML_FORMAT)
-    print("  -d, --device-protocol    Generate data model tree using this device protocol. Default device protocol: %s" % DEVICE_PROTOCOL)
-    print("  -m, --manufacturer       Generate data model tree using this manufacturer. Default manufacturer: %s" % MANUFACTURER)
-    print("  -o, --manufacturer-oui   Generate data model tree using this manufacturer oui. Default manufacturer oui: %s" % MANUFACTURER_OUI)
-    print("  -c, --product-class      Generate data model tree using this product class. Default product class: %s" % PRODUCT_CLASS)
-    print("  -n, --model-name         Generate data model tree using this model name. Default model name: %s" % MODEL_NAME)
-    print("  -s, --software-version   Generate data model tree using this software version. Default software version: %s" % SOFTWARE_VERSION)
-    print("  -h, --help               This help text")
-    print("Urls: ")
-    print("  url^(branch,hash,tag)    The url with branch, hash or tag to be used")
-    print("")
-    print("Examples: ")
-    print("  - python " + sys.argv[0])
-    print("    ==> Generate xml file in %s" % XML_FILE)
-    print("  - python " + sys.argv[0] + " -f HDM")
-    print("    ==> Generate xml file with HDM format in %s" % XML_FILE)
-    print("  - python " + sys.argv[0] + " -v iopsys")
-    print("    ==> Generate xml file using iopsys extension in %s" % XML_FILE)
-    print("  - python " + sys.argv[0] + " -r https://dev.iopsys.eu/feed/iopsys.git^devel,https://dev.iopsys.eu/iopsys/mydatamodel.git^5c8e7cb740dc5e425adf53ea574fb529d2823f88")
-    print("    ==> Generate xml file in %s" % XML_FILE)
-    print("  - python " + sys.argv[0] + " -v iopsys,openwrt,test -r https://dev.iopsys.eu/feed/iopsys.git^6.0.0ALPHA1 -p X_TEST_COM_")
-    print("    ==> Generate xml file in %s" % XML_FILE)
 
-def pretty_format( elem ):
+def pretty_format(elem):
     elem_string = ET.tostring(elem, 'UTF-8')
     reparsed = MD.parseString(elem_string)
     return reparsed.toprettyxml(indent="  ")
 
-def generate_bbf_xml_file():
+def generate_bbf_xml_file(output_file):
     global DM_OBJ_COUNT
     global DM_PARAM_COUNT
 
-    bbf.remove_file(XML_FILE)
+    bbf.remove_file(output_file)
 
     root = ET.Element("dm:document")
     root.set("xmlns:dm", "urn:broadband-forum-org:cwmp:datamodel-1-8")
     root.set("xmlns:dmr", "urn:broadband-forum-org:cwmp:datamodel-report-0-1")
@@ -88,11 +53,11 @@ def generate_bbf_xml_file():
 
     for value in bbf.LIST_SUPPORTED_DM:
-        obj = value.split(",")
+        obj = value.strip().split(",")
         access = "readOnly" if obj[1] == "DMREAD" else "readWrite"
 
         if obj[2] == "DMT_OBJ":
-            ## Object
+            # Object
             objec = ET.SubElement(model, "object")
             objec.set("name", obj[0])
             objec.set("access", access)
@@ -100,25 +65,27 @@
             objec.set("maxEntries", "20")
             DM_OBJ_COUNT += 1
         else:
-            ## Parameter
+            # Parameter
             parameter = ET.SubElement(objec, "parameter")
             parameter.set("name", obj[0][obj[0].rindex('.')+1:])
             parameter.set("access", access)
             description = ET.SubElement(parameter, "description")
-            description.text = str("parameter " + obj[0][obj[0].rindex('.')+1:])
+            description.text = str(
+                "parameter " + obj[0][obj[0].rindex('.')+1:])
             syntax = ET.SubElement(parameter, "syntax")
             ET.SubElement(syntax, ARRAY_TYPES.get(obj[2], None))
             DM_PARAM_COUNT += 1
 
-    xml_file = open(XML_FILE, "w")
+    xml_file = open(output_file, "w")
     xml_file.write(pretty_format(root))
     xml_file.close()
 
-def generate_hdm_xml_file():
+def generate_hdm_xml_file(output_file):
     global DM_OBJ_COUNT
     global DM_PARAM_COUNT
 
-    bbf.remove_file(XML_FILE)
+    bbf.remove_file(output_file)
 
     root = ET.Element("deviceType")
     root.set("xmlns", "urn:dslforum-org:hdm-0-0")
     root.set("xmlns:xsi", "http://www.w3.org/2001/XMLSchema-instance")
@@ -136,8 +103,8 @@ def generate_hdm_xml_file():
     modelName.text = str(MODEL_NAME)
     softwareVersion = ET.SubElement(root, "softwareVersion")
     softwareVersion.text = str(SOFTWARE_VERSION)
-    type = ET.SubElement(root, "type")
-    type.text = str("Device:2")
+    dm_type = ET.SubElement(root, "type")
+    dm_type.text = str("Device:2")
 
     dataModel = ET.SubElement(root, "dataModel")
     attributes = ET.SubElement(dataModel, "attributes")
@@ -173,81 +140,165 @@
     attributeLength = ET.SubElement(attribute_visibility, "attributeLength")
     attributeLength.text = str("64")
 
-    param_array = np.empty(15, dtype=ET.Element)
+    # param_array = np.empty(15, dtype=ET.Element)
+    param_array = [ET.Element] * 15
     param_array[0] = parameters
 
     for value in bbf.LIST_SUPPORTED_DM:
-        obj = value.split(",")
+        obj = value.strip().split(",")
 
         if obj[2] == "DMT_OBJ":
-            ## Object
-            obj_tag = ET.SubElement(param_array[obj[0].replace(".{i}", "").count('.')-1], "parameter")
+            # Object
+            obj_tag = ET.SubElement(
+                param_array[obj[0].replace(".{i}", "").count('.')-1], "parameter")
             obj_name = ET.SubElement(obj_tag, "parameterName")
             obj_name.text = str(obj[0].replace(".{i}", "").split('.')[-2])
             obj_type = ET.SubElement(obj_tag, "parameterType")
             obj_type.text = str("object")
             obj_array = ET.SubElement(obj_tag, "array")
-            obj_array.text = str("true" if obj[0].endswith(".{i}.") else "false")
+            obj_array.text = str(
+                "true" if obj[0].endswith(".{i}.") else "false")
             parameters = ET.SubElement(obj_tag, "parameters")
             param_array[obj[0].replace(".{i}", "").count('.')] = parameters
             DM_OBJ_COUNT += 1
         else:
-            ## Parameter
-            param_tag = ET.SubElement(param_array[obj[0].replace(".{i}", "").count('.')], "parameter")
+            # Parameter
+            param_tag = ET.SubElement(
+                param_array[obj[0].replace(".{i}", "").count('.')], "parameter")
             param_name = ET.SubElement(param_tag, "parameterName")
             param_name.text = str(obj[0][obj[0].rindex('.')+1:])
             param_type = ET.SubElement(param_tag, "parameterType")
             param_type.text = str(ARRAY_TYPES.get(obj[2], None))
             DM_PARAM_COUNT += 1
 
-    xml_file = open(XML_FILE, "w")
+    xml_file = open(output_file, "w")
     xml_file.write(pretty_format(root))
     xml_file.close()
 
-try:
-    opts, args = getopt.getopt(sys.argv[1:], "hr:v:p:d:m:o:c:n:s:f:", ["remote-dm=", "vendor-list=", "vendor-prefix=", "device-protocol=", "manufacturer=", "manufacturer-oui=", "product-class=", "model-name=", "software-version=", "format="])
-except getopt.GetoptError:
-    print_dmxml_usage()
-    exit(1)
-
-for opt, arg in opts:
-    if opt in ("-h", "--help"):
-        print_dmxml_usage()
-        exit(1)
-    elif opt in ("-r", "--remote-dm"):
-        BBF_REMOTE_DM = arg
-    elif opt in ("-v", "--vendor-list"):
-        BBF_VENDOR_LIST = arg
-    elif opt in ("-p", "--vendor-prefix"):
-        bbf.BBF_VENDOR_PREFIX = arg
-    elif opt in ("-d", "--device-protocol"):
-        DEVICE_PROTOCOL = arg
-    elif opt in ("-m", "--manufacturer"):
-        MANUFACTURER = arg
-    elif opt in ("-o", "--manufacturer-oui"):
-        MANUFACTURER_OUI = arg
-    elif opt in ("-c", "--product-class"):
-        PRODUCT_CLASS = arg
-    elif opt in ("-n", "--model-name"):
-        MODEL_NAME = arg
-    elif opt in ("-s", "--software-version"):
-        SOFTWARE_VERSION = arg
-    elif opt in ("-f", "--format"):
-        XML_FORMAT = arg
-
-bbf.generate_supported_dm(BBF_REMOTE_DM, BBF_VENDOR_LIST)
-
-if XML_FORMAT == "HDM":
-    generate_hdm_xml_file()
-else:
-    generate_bbf_xml_file()
-
-print("Number of BBF Data Models objects is %d" % DM_OBJ_COUNT)
-print("Number of BBF Data Models parameters is %d" % DM_PARAM_COUNT)
-print("End of BBF Data Models Generation")
-
-if (os.path.isfile(XML_FILE)):
-    print("XML file generated: %s" % XML_FILE)
-else:
-    print("No XML file generated!")
+
+def generate_xml(acs='default', output_file="datamodel.xml"):
+    print("Generating BBF Data Models in xml format for %s acs..." % acs)
+    bbf.fill_list_supported_dm()
+
+    if acs == "HDM":
+        generate_hdm_xml_file(output_file)
+    else:
+        generate_bbf_xml_file(output_file)
+
+    if os.path.isfile(output_file):
+        print("├── XML file generated: %s" % output_file)
+    else:
+        print("├── Error in generating xml file")
+
+    print("├── Number of BBF Data Models objects is %d" % DM_OBJ_COUNT)
+    print("└── Number of BBF Data Models parameters is %d" % DM_PARAM_COUNT)
+
+
+### main ###
+if __name__ == '__main__':
+    parser = argparse.ArgumentParser(
+        description='Script to generate the list of supported and non-supported parameters in xml format',
+        epilog='Part of BBF-tools; refer to the README for more examples'
+    )
+    parser.add_argument(
+        '-r', '--remote-dm',
+        action='append',
+        metavar='https://dev.iopsys.eu/iopsys/stunc.git^devel',
+        help='Include OBJ/PARAM defined under remote repositories registered as bbf plugins'
+    )
+    parser.add_argument(
+        '-v', '--vendor-list',
+        action='append',
+        metavar='iopsys',
+        help='Generate data model tree with vendor extension OBJ/PARAM.'
+    )
+    parser.add_argument(
+        '-p', '--vendor-prefix',
+        default='iopsys',
+        metavar='X_IOPSYS_EU_',
+        help='Generate data model tree using the provided vendor prefix for vendor-defined objects.'
+    )
+    parser.add_argument(
+        '-d', '--device-protocol',
+        default='DEVICE_PROTOCOL_DSLFTR069v1',
+        metavar='DEVICE_PROTOCOL_DSLFTR069v1',
+        help='Generate data model tree using this device protocol.'
+    )
+    parser.add_argument(
+        '-m', '--manufacturer',
+        default='iopsys',
+        metavar='iopsys',
+        help='Generate data model tree using this manufacturer.'
+    )
+    parser.add_argument(
+        '-u', '--manufacturer-oui',
+        default='002207',
+        metavar='002207',
+        help='Generate data model tree using this manufacturer OUI.'
+    )
+    parser.add_argument(
+        '-c', '--product-class',
+        default='DG400PRIME',
+        metavar='DG400PRIME',
+        help='Generate data model tree using this product class.'
+    )
+    parser.add_argument(
+        '-n', '--model-name',
+        default='DG400PRIME-A',
+        metavar='DG400PRIME-A',
+        help='Generate data model tree using this model name.'
+    )
+    parser.add_argument(
+        '-s', '--software-version',
+        default='1.2.3.4',
+        metavar='1.2.3.4',
+        help='Generate data model tree using this software version.'
+    )
+    parser.add_argument(
+        '-f', '--format',
+        default='BBF',
+        metavar='BBF',
+        choices=['HDM', 'BBF', 'default'],
+        help='Generate data model tree in HDM or BBF format.'
+    )
+    parser.add_argument(
+        '-o', '--output',
+        default='datamodel.xml',
+        metavar='datamodel.xml',
+        help='Generate the output file with the given name'
+    )
+    args = parser.parse_args()
+
+    MANUFACTURER = args.manufacturer
+    DEVICE_PROTOCOL = args.device_protocol
+    MANUFACTURER_OUI = args.manufacturer_oui
+    PRODUCT_CLASS = args.product_class
+    MODEL_NAME = args.model_name
+    SOFTWARE_VERSION = args.software_version
+
+    plugins = []
+    if isinstance(args.remote_dm, list):
+        for f in args.remote_dm:
+            x = f.split('^')
+            r = {}
+            r["repo"] = x[0]
+            if len(x) == 2:
+                r["version"] = x[1]
+            plugins.append(r)
+
+    bbf.generate_supported_dm(args.vendor_prefix, args.vendor_list, plugins)
+    bbf.clean_supported_dm_list()
+    generate_xml(args.format, args.output)
+    print("Datamodel generation completed, artifacts available in %s" % args.output)
tools/tools_input.json Normal file
@@ -0,0 +1,64 @@
{
"manufacturer": "iopsys",
"protocol": "DEVICE_PROTOCOL_DSLFTR069v1",
"manufacturer_oui": "002207",
"product_class": "DG400PRIME",
"model_name": "DG400PRIME-A",
"software_version": "1.2.3.4",
"vendor_list": [
"iopsys",
"openwrt"
],
"vendor_prefix": "X_IOPSYS_EU_",
"plugins": [
{
"repo": "https://dev.iopsys.eu/iopsys/bulkdata.git",
"dm_files": [
"datamodel.c"
]
},
{
"repo": "https://dev.iopsys.eu/iopsys/xmppc.git"
},
{
"repo": "https://dev.iopsys.eu/iopsys/stunc.git",
"version": "devel",
"dm_files": [
"datamodel.c"
]
},
{
"repo": "https://dev.iopsys.eu/iopsys/udpechoserver.git",
"version": "master",
"dm_files": [
"datamodel.c"
]
},
{
"repo": "https://dev.iopsys.eu/iopsys/twamp.git",
"version": "master",
"dm_files": [
"datamodel.c"
]
},
{
"repo": "https://dev.iopsys.eu/iopsys/periodicstats.git",
"version": "devel",
"dm_files": [
"bbf_plugin/bbf_plugin.c"
]
}
],
"output": {
"acs": [
"default",
"hdm"
],
"file_format": [
"xls",
"xml"
],
"output_dir": "./out",
"output_file_prefix": "datamodel"
}
}
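Each "plugins" entry above pairs a repo with an optional version, the same shape the XML generator builds from its repeated `--remote-dm url^version` arguments. A minimal sketch of parsing such specs into that shape (the helper name is hypothetical, not part of the tools):

```python
# Hypothetical helper: split "url^version" specs into the
# {"repo": ..., "version": ...} dicts used in tools_input.json.
def parse_remote_dm(specs):
    plugins = []
    for spec in specs:
        parts = spec.split('^')
        entry = {"repo": parts[0]}
        if len(parts) == 2:
            # version is optional: a bare URL means "default branch"
            entry["version"] = parts[1]
        plugins.append(entry)
    return plugins

print(parse_remote_dm(["https://dev.iopsys.eu/iopsys/stunc.git^devel",
                       "https://dev.iopsys.eu/iopsys/xmppc.git"]))
```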