grid5000 / reference-repository

Commit 30457b12, authored 6 years ago by Lucas Nussbaum

[dev] add a data_loader alternative to input_loader

Parent: 4bfb968e
1 changed file: lib/refrepo/data_loader.rb (new file, mode 100644), 41 additions, 0 deletions.
# Load a hierarchy of JSON files into a Ruby hash
require 'json'
require 'refrepo/hash/hash'

def load_data_hierarchy
  global_hash = {} # the global data structure
  directory = File.expand_path("../../data/grid5000/", File.dirname(__FILE__))
  Dir.chdir(directory) do
    # Recursively list the .json files.
    # The order in which the results are returned depends on the system (http://ruby-doc.org/core-2.2.3/Dir.html).
    # => List deepest files first, as they have lowest priority when hash keys are duplicated.
    list_of_files = Dir['**/*.json'].sort_by { |x| -x.count('/') }
    list_of_files.each do |filename|
      # Load the JSON file
      file_hash = JSON::parse(IO::read(filename))
      # Inject the file content into global_hash, at the right place
      path_hierarchy = File.dirname(filename).split('/') # split the file path (relative to the data directory)
      path_hierarchy = [] if path_hierarchy == ['.']
      if ['nodes', 'network_equipments'].include?(path_hierarchy.last)
        # It's a node or a network equipment: add the uid
        path_hierarchy << file_hash['uid']
      end
      # Build the nested hash hierarchy according to the file path
      file_hash = Hash.from_array(path_hierarchy, file_hash)
      # Merge global_hash and file_hash. For duplicate keys, the value from file_hash wins.
      global_hash = global_hash.deep_merge(file_hash)
      # Expand the hash. Done at each iteration to enforce priorities between duplicate entries:
      # i.e. keys to be expanded have lower priority than existing entries, but higher priority
      # than entries found in later files.
      global_hash.expand_square_brackets(file_hash)
    end
  end
  return global_hash
end
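The heavy lifting relies on refrepo's own Hash extensions (`Hash.from_array`, `deep_merge`, `expand_square_brackets`) from `refrepo/hash/hash`, which are not shown in this diff. As a sketch of the assumed semantics of the first two, and of the deepest-first ordering, here is a self-contained illustration; the helper re-implementations and file names are hypothetical, not the project's actual code:

```ruby
# Illustrative stand-ins for the refrepo hash helpers (assumed semantics):
# from_array nests a value under a path of keys; deep_merge merges
# recursively, with the right-hand side winning on duplicate scalar keys.

def from_array(path, value)
  path.reverse.inject(value) { |acc, key| { key => acc } }
end

def deep_merge(a, b)
  a.merge(b) do |_key, old, new|
    old.is_a?(Hash) && new.is_a?(Hash) ? deep_merge(old, new) : new
  end
end

# Sorting by descending slash count lists deepest paths first, so they are
# merged first and end up with lowest priority (hypothetical file names).
files = ['sites/grenoble/nodes/dahu-1.json', 'sites/grenoble.json', 'global.json']
sorted = files.sort_by { |x| -x.count('/') }
# deepest first: the node file, then the site file, then global.json

# A later (shallower) file overrides a value set by an earlier (deeper) one.
global = {}
global = deep_merge(global, from_array(['sites', 'grenoble'], { 'latitude' => 45.1 }))
global = deep_merge(global, from_array(['sites'], { 'grenoble' => { 'latitude' => 45.2 } }))
puts global['sites']['grenoble']['latitude'] # => 45.2
```

Under these assumptions, the shallower file's value wins, which matches the loader's stated priority rule.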