Commit 60ba8211 authored by Philippe Virouleau

Merge branch 'accesses' into 'master'

Generate access rules data in reference repository

See merge request !728
parents 5d8a603f 48007800
Pipeline #996204 passed
@@ -27,14 +27,14 @@ include:
     - apt-get update && apt-get -y --no-install-recommends install build-essential wget git ruby ruby-dev bundler rake gpg clustershell graphviz
     # Call the original before_script section
     - !reference [.base, before_script]
+    # Add G5K CA certificate
+    - wget --no-check-certificate -q https://www.grid5000.fr/certs/ca2019.grid5000.fr.crt -O /usr/local/share/ca-certificates/ca2019.grid5000.fr.crt
+    - /usr/sbin/update-ca-certificates
 validate-data:
   extends: .template-refrepo
   stage: validate
   script:
-    # Add G5K CA certificate
-    - wget --no-check-certificate -q https://www.grid5000.fr/certs/ca2019.grid5000.fr.crt -O /usr/local/share/ca-certificates/ca2019.grid5000.fr.crt
-    - /usr/sbin/update-ca-certificates
     - bundle exec rake valid:schema
     - bundle exec rake valid:duplicates
@@ -86,6 +86,4 @@ valid-homogeneity:
   stage: checks
   extends: .template-refrepo
   script:
-    - wget --no-check-certificate -q https://www.grid5000.fr/certs/ca2019.grid5000.fr.crt -O /usr/local/share/ca-certificates/ca2019.grid5000.fr.crt
-    - /usr/sbin/update-ca-certificates
     - bundle exec rake valid:homogeneity
Source diff could not be displayed for two files: they are too large.
# groups of ggas used by the access generator
# '%blabla' means "all the groups of the site blabla"
# '-blabla' means "remove the group blabla from the list"
# e.g.: mc_nancy: ['%mc-nancy', 'other_group', '-group_to_remove']
# These groups of gga are usable in this file and in the nodeset prio configuration
# with an @ prefix (e.g.: @mc_nancy)
# The order matters (you cannot use a group of gga before the line where it is created)
# You can combine -@ or -% but not @%.
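# Illustration only (this hypothetical group is not part of the configuration):
#   example_group: ['%mc-nancy', 'caramba', '-coast']
# would expand to all the ggas of the mc-nancy site, plus the caramba gga, minus coast.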
# ADMIN GROUP WITH MAXIMUM ACCESS
## admin groups (include jenkins)
admin: ['%mc-staff-site','%g5k-staff-site']
# GROUPS FOR SPECIFIC INRIA CENTERS
## mc group of inria rennes, also includes some slices-fr groups
inria_rennes_group: ['%mc-rennes', cidre, kerdata, magellan, myriads, pacap, wide]
## mc group of inria nancy, also includes some slices-fr groups
inria_nancy_group: ['%mc-nancy', coast]
# GROUP FOR ALL INRIA
## economic activity groups with access to mc at the same level as other inria teams
economic_activity_inria: [hive, inriastartupstudio, sonaide, feelim]
## groups of guest sites that have access to inria mc at the same level as other inria teams
guest_inria: [inria-bso, inria-chile, inria-dsi, inria-paris, inria-sidf, tadaam, swh]
## all inria (and friends of inria) groups
inria_group: ['@inria_rennes_group', '@inria_nancy_group', '@economic_activity_inria', '@guest_inria', '%mc-sophia', '%mc-lyon', '%mc-lille', '%mc-grenoble']
# OTHER GROUPS WITH (MINIMAL) ACCESS TO MC
## Groups still not migrated
#FIXME as ggas are migrated, this list should contain nothing in the end
unmigrated_gga_lille: [bonus, cristal, inria-lne, lille-misc, lisic, magnet, sequel, spirals]
unmigrated_gga_grenoble: [convecs, corse, datamove, erods, grenoble-misc, gricad, inria-gra, lig, ljk, mrim, nanosim2, polaris]
unmigrated_gga_lyon: [chroma, emeraude, lip, liris, lyon-misc, maracas, privatics]
unmigrated_gga_nantes: [ls2n, nantes-misc, stack]
unmigrated_gga_sophia: [i3s, inria-sam, sophia-misc, zenith]
unmigrated_gga_toulouse: [irit, laas, toulouse-misc]
unmigrated_gga: ['@unmigrated_gga_lille','@unmigrated_gga_grenoble','@unmigrated_gga_lyon','@unmigrated_gga_nantes','@unmigrated_gga_sophia', '@unmigrated_gga_toulouse']
slices_site_in_france: ['%slices-fr-strasbourg', '%slices-fr-rennes', '%slices-fr-nancy']
## groups with access to mc
other_groups_with_access: ['%mc-guest', '@unmigrated_gga', '@slices_site_in_france' ]
# This is an input file of reference-repository.git/generators/puppet/accesses.rb
---
#FIXME empenn should be removed when bug 15172 is resolved
graffiti:
p1: ['larsen', 'multispeech', 'empenn', '@admin']
p2: ['@inria_nancy_group']
p3: ['@inria_group']
p4: ['@other_groups_with_access']
grue:
p1: ['larsen','multispeech', '@admin']
p2: ['@inria_nancy_group']
p3: ['@inria_group']
p4: ['@other_groups_with_access']
# CPER IT2MP + FEDER
grappe: &CPERIT2MP
p1: ['capsid', '@admin']
p2: ['@inria_nancy_group']
p3: ['@inria_group']
p4: ['@other_groups_with_access']
grat: *CPERIT2MP
#FIXME empenn should be removed when bug 15172 is resolved
grele:
p1: ['multispeech', 'capsid', 'empenn', '@admin']
p2: ['@inria_nancy_group']
p3: ['@inria_group']
p4: ['@other_groups_with_access']
grosminet:
p1: ['caramba', '@admin']
p2: ['@inria_nancy_group']
p3: ['@inria_group']
p4: ['@other_groups_with_access']
grostiti:
p1: ['caramba', '@admin']
p2: ['@inria_nancy_group']
p3: ['@inria_group']
p4: ['@other_groups_with_access']
gruss:
p1: ['multispeech', 'tangram', 'capsid', '@admin']
p2: ['@inria_nancy_group']
p3: ['@inria_group']
p4: ['@other_groups_with_access']
grvingt:
p1: ['caramba', '@admin']
p2: ['@inria_nancy_group']
p3: ['@inria_group']
p4: ['@other_groups_with_access']
# This is an input file of reference-repository.git/generators/puppet/accesses.rb
---
# Clusters of a specific team
## WIDE
roazhon1:
p1: [wide, '@admin']
p2: ['@inria_rennes_group']
p3: ['@inria_group']
p4: ['@other_groups_with_access']
## HYCOMES
roazhon2:
p1: [hycomes, '@admin']
p2: ['@inria_rennes_group']
p3: ['@inria_group']
p4: ['@other_groups_with_access']
## NEURINFO (empenn team)
abacus19:
p1: [empenn, '@admin']
besteffort: ['@inria_group', '@other_groups_with_access']
roazhon5:
p1: [empenn, '@admin']
p2: ['@inria_rennes_group']
p3: ['@inria_group']
p4: ['@other_groups_with_access']
## SAIRPICO
roazhon7: &sairpico-shared
p1: [sairpico, '@admin']
p2: ['@inria_rennes_group']
p3: ['@inria_group']
p4: ['@other_groups_with_access']
roazhon8: *sairpico-shared
abacus1: *sairpico-shared
abacus2: *sairpico-shared
## LINKMEDIA
abacus3: &linkmedia-exclusive
p1: [linkmedia, '@admin']
besteffort: ['@inria_group', '@other_groups_with_access']
abacus10: *linkmedia-exclusive
abacus18: *linkmedia-exclusive
abacus21: *linkmedia-exclusive
## SIROCCO
abacus5: &sirocco-exclusive
p1: [sirocco, '@admin']
besteffort: ['@inria_group', '@other_groups_with_access']
abacus20: *sirocco-exclusive
abacus22-A: *sirocco-exclusive
## INTUIDOC
abacus11:
p1: [intuidoc, '@admin']
p2: ['@inria_rennes_group']
p3: ['@inria_group']
p4: ['@other_groups_with_access']
## LACODAM
abacus12:
p1: [lacodam, '@admin']
besteffort: ['@inria_group', '@other_groups_with_access']
## TARAN
abacus17:
p1: [taran, '@admin']
besteffort: ['@inria_group', '@other_groups_with_access']
## CIDRE
abacus22-B:
p1: [cidre, '@admin']
besteffort: ['@inria_group', '@other_groups_with_access']
# Common clusters of the Rennes Inria Center
roazhon3: &inria-rennes
p1: ['@inria_rennes_group', '@admin']
p3: ['@inria_group']
p4: ['@other_groups_with_access']
roazhon4: *inria-rennes
roazhon6: *inria-rennes
roazhon9: *inria-rennes
roazhon10: *inria-rennes
roazhon11: *inria-rennes
roazhon12: *inria-rennes
roazhon13: *inria-rennes
abacus4: *inria-rennes
abacus8: *inria-rennes
abacus9: *inria-rennes
abacus14: *inria-rennes
abacus16: *inria-rennes
abacus25: *inria-rennes
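The per-cluster files above rely on plain YAML anchors and aliases (for example &sairpico-shared and *sairpico-shared) to share a single access definition across several clusters; alias support is enabled when the generator loads these files. A minimal sketch of the pattern, with hypothetical cluster and team names:
examplecluster1: &example-shared
  p1: [example_team, '@admin']
  p2: ['@inria_rennes_group']
  p3: ['@inria_group']
  p4: ['@other_groups_with_access']
examplecluster2: *example-shared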
# frozen_string_literal: true
require 'refrepo/data_loader'
require 'git'
$group_of_gga = {}
ALL_GGAS_AND_SITES = RefRepo::Utils.get_public_api('users/ggas_and_sites')
ALL_GGAS = ALL_GGAS_AND_SITES['ggas']
ALL_SITES = ALL_GGAS_AND_SITES['sites']
INPUT_FOLDER = 'input/grid5000/access'
IGNORE_SITES = %w[strasbourg]
$yaml_load_args = {}
#FIXME We cannot drop ruby 2.7 support until jenkins is on debian 11
$yaml_load_args[:aliases] = true if ::Gem::Version.new(RUBY_VERSION) >= ::Gem::Version.new('3.0.0')
# Ugly function to sort a hash recursively, since hash ordering differs between ruby 2.7 and 3.x
def deep_sort_hash(hash)
sorted_hash = hash.sort.to_h
sorted_hash.each do |key, value|
sorted_hash[key] = deep_sort_hash(value) if value.is_a?(Hash)
end
sorted_hash
end
def generate_accesses_yaml(output_path, data)
output_file = File.new(output_path, 'w')
output_file.write(deep_sort_hash(data).to_yaml)
end
def generate_accesses_json(output_path, data)
output_file = File.new(output_path, 'w')
output_file.write(JSON.dump(deep_sort_hash(data)))
end
##########################################
# nodeset mode history generation #
##########################################
# Compute the prio mode of a nodeset:
# if only p1 is defined, the nodeset is exclusive
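# Illustrative examples (hypothetical input hashes):
#   prio_mode({'p1' => ['caramba'], 'p2' => []})         #=> "exclusive caramba"
#   prio_mode({'p1' => ['caramba'], 'p3' => ['coast']})  #=> "shared"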
def prio_mode(prio)
return 'undefined' if prio.nil? || prio.values.flatten.empty?
keys_to_check = prio.keys - ['besteffort']
only_p1_filled = keys_to_check.all? do |key|
if key == 'p1'
!prio[key].nil? && !prio[key].empty?
else
prio[key].nil? || prio[key].empty?
end
end
if only_p1_filled
"exclusive #{prio['p1'].join(',')}"
else
'shared'
end
end
def process_commits(commits, git_repo, yaml_path, nodeset_history, known_nodeset)
commits.each do |date, sha|
yaml_content = known_nodeset.to_h { |a| [a, nil] }.update(load_yaml_from_git(git_repo, sha, yaml_path))
yaml_content.each do |nodeset, prio|
nodeset_history[nodeset] ||= []
mode = prio_mode(prio)
update_history(nodeset_history, nodeset, date, mode)
known_nodeset.add(nodeset)
end
end
end
def load_yaml_from_git(git_repo, sha, yaml_path)
relative_path = yaml_path.sub(git_repo.repo.path.gsub(/\.git$/, ''), '')
YAML.load(git_repo.show("#{sha}:#{relative_path}"), **$yaml_load_args) || {}
end
# Update the history only if the mode changed; if so, we terminate the last entry and
# add a new one
def update_history(nodeset_history, nodeset, date, mode)
last_entry = nodeset_history[nodeset].last
return unless last_entry.nil? || (last_entry[1] == 'ACTIVE' && last_entry[2] != mode)
last_entry[1] = date.dup if last_entry
nodeset_history[nodeset] << [date.dup, 'ACTIVE', mode]
end
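# Illustrative shape of the history built by the function below (nodeset name and dates
# are hypothetical; real entries carry Time objects): each nodeset maps to a list of
# [start_date, end_date_or_'ACTIVE', mode] entries, e.g.
#   { 'grvingt' => [['2021-03-02', '2023-06-01', 'exclusive caramba'],
#                   ['2023-06-01', 'ACTIVE', 'shared']] }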
def generate_nodeset_mode_history
site_data_hierarchy = load_data_hierarchy
nodeset_history = {}
git_repo = Git.open(".")
diff = git_repo.diff.name_status.keys.select { |x| x.start_with?(INPUT_FOLDER) }
unless diff.empty?
abort "Please commit your changed on: #{diff.join(',')}. This generator use the git history to build history of the access mode of the nodes"
end
site_data_hierarchy['sites'].each_key do |site|
known_nodeset = Set.new
yaml_path = File.join(INPUT_FOLDER, "#{site}.yaml")
next unless File.exist?(yaml_path)
commits = git_repo.log.path(yaml_path).map { |commit| [commit.date, commit.sha] }.sort_by(&:first)
process_commits(commits, git_repo, yaml_path, nodeset_history, known_nodeset)
end
nodeset_history
end
##########################################
# access level generation #
##########################################
# Helper: yields each element of the array together with the tail of elements that follow it
def value_and_tail_iterator(array)
Enumerator.new do |yielder|
array.each_with_index do |value, index|
yielder.yield [value, array[(index + 1)..-1]]
end
end
end
def priority_to_level(priority)
case priority
when 'p1'
40
when 'p2'
30
when 'p3'
20
when 'p4'
10
when 'besteffort'
0
end
end
def determine_access_level(expanded_ggas, gga)
value_and_tail_iterator(%w[p1 p2 p3 p4 besteffort]).each do |level, lower_levels|
next unless expanded_ggas[level]&.delete(gga)
lower_levels.each { |l| expanded_ggas[l]&.delete(gga) }
return { 'label' => level, 'level' => priority_to_level(level) }
end
{ 'label' => 'no-access', 'level' => -1 }
end
def create_access(prio, nodeset)
expanded_ggas = prio.transform_values { |x| expand_ggas(x) }
puts "Warning: No prio defined for #{nodeset}" if expanded_ggas.values.flatten.empty?
h = ALL_GGAS.map { |x| x['name'] }.sort.map do |gga|
level_info = determine_access_level(expanded_ggas, gga)
[gga, level_info]
end.to_h
expanded_ggas.each do |_, remaining_gga|
unless remaining_gga.empty?
puts "Warning: Some ggas specified for the #{nodeset} nodeset do not exist: #{remaining_gga.join(',')}"
end
end
h
end
def expand_ggas(ggas)
return [] if ggas.nil?
expanded_ggas = []
ggas.each do |group|
to_remove = group.start_with?('-')
group = group[1..-1] if to_remove
if group.start_with?('%')
site = group[1..-1]
abort "Error: Unable to expand %#{site}: no site of that name" unless ALL_SITES.include?(site)
site_gga = ALL_GGAS.select { |x| x['site'] == site }.map { |x| x['name'] }
if site_gga.empty?
puts "Warning: expanding %#{site} gave no gga"
end
expanded_ggas = to_remove ? expanded_ggas - site_gga : expanded_ggas + site_gga
elsif group.start_with?('@')
group_gga = group[1..-1]
unless $group_of_gga.key?(group_gga)
abort "Error: Unable to expand @#{group_gga}: group of gga is not not defined?"
end
expanded_ggas = to_remove ? expanded_ggas - $group_of_gga[group_gga] : expanded_ggas + $group_of_gga[group_gga]
elsif to_remove
expanded_ggas.delete(group)
else
expanded_ggas << group
end
end
expanded_ggas.uniq
end
def generate_access_level
site_data_hierarchy = load_data_hierarchy
group_config_path = File.join(INPUT_FOLDER, 'group.yaml')
if File.exist?(group_config_path)
YAML.load_file(group_config_path).each do |k, v|
$group_of_gga[k] = expand_ggas(v)
end
else
# FIXME: put some "skip" in the yaml, and remove useless yamls.
puts 'Warning: Skipping group configuration since there is no file'
end
nodesets = {}
site_data_hierarchy['sites'].each_key do |site|
site_config_path = File.join(INPUT_FOLDER, "#{site}.yaml")
if File.exist?(site_config_path)
yaml_access_file = YAML.load_file(site_config_path, **$yaml_load_args)
unless yaml_access_file
puts "Warning: #{site} configuration is present but empty"
next
end
nodesets.update(yaml_access_file) unless IGNORE_SITES.include?(site)
end
end
unspecified_nodesets = all_nodesets - nodesets.keys
overspecified_nodesets = nodesets.keys - all_nodesets
abort "Some nodeset are not configure: #{unspecified_nodesets.join(', ')}" unless unspecified_nodesets.empty?
puts "Warning: Some unkown (or not production) nodeset ARE configured : #{overspecified_nodesets.join(', ')}" unless overspecified_nodesets.empty?
nodesets.each_with_object({}) do |(nodeset, prio_input), acc|
create_access(prio_input, nodeset).each do |gga, prio|
acc[gga] = {} unless acc.key?(gga)
acc[gga][nodeset] = prio
end
end
end
def all_nodesets
site_data_hierarchy = load_data_hierarchy
nodesets = []
site_data_hierarchy['sites'].each do |_site, site_details|
site_details.fetch('clusters', {}).each do |_cluster, cluster_details|
next unless cluster_details['queues'].include?('production')
nodesets.concat(cluster_details['nodes'].map { |_, node_details| node_details['nodeset'] })
end
end
nodesets.uniq
end
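To make the generated output concrete, below is a minimal sketch of the structure produced by generate_access_level and written to accesses.yaml: one entry per gga, mapping each nodeset to the label and numeric level returned by priority_to_level. The values are illustrative; the roazhon3 entry assumes the caramba gga belongs to the mc-nancy site, so that it falls under @inria_group (p3) there.
caramba:
  grvingt:
    label: p1
    level: 40
  roazhon3:
    label: p3
    level: 20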
 # frozen_string_literal: true
-require 'refrepo/data_loader'
-require 'git'
+require 'refrepo/accesses'
-$group_of_gga = {}
-ALL_GGAS = RefRepo::Utils.get_api('users/groups?is_gga=true')['items']
-ALL_SITE = RefRepo::Utils.get_api('users/groups?is_site=true')['items'].map { |x| x['name'] }
-$yaml_load_args = {}
-#FIXME We cannot drop ruby 2.7 support until jenkins is on debian 11
-$yaml_load_args[:aliases] = true if ::Gem::Version.new(RUBY_VERSION) >= ::Gem::Version.new('3.0.0')
-# Ulgy function to order hash since order is different on ruby 2.7 and 3.x
-def deep_sort_hash(hash)
-  sorted_hash = hash.sort.to_h
-  sorted_hash.each do |key, value|
-    sorted_hash[key] = deep_sort_hash(value) if value.is_a?(Hash)
-  end
-  sorted_hash
-end
 # generate nodeset mode history and access level
 def generate_puppet_accesses(options)
-  options[:conf_dir] = "#{options[:output_dir]}/platforms/production/generators/access" unless options[:conf_dir]
-  access_mode_history = generate_nodeset_mode_history(options)
-  access_level = generate_access_level(options)
+  access_mode_history = generate_nodeset_mode_history
+  access_level = generate_access_level
   output_file_path = "#{options[:output_dir]}/platforms/production/modules/generated/files/grid5000/accesses/"
   generate_accesses_yaml(File.join(output_file_path, 'accesses_mode_history.yaml'), access_mode_history)
@@ -40,211 +16,3 @@ def generate_puppet_accesses(options)
 end.to_h
 generate_accesses_yaml(File.join(output_file_path, 'accesses.yaml'), filtered_access_level)
 end
def generate_accesses_yaml(output_path, data)
output_file = File.new(output_path, 'w')
output_file.write(deep_sort_hash(data).to_yaml)
end
def generate_accesses_json(output_path, data)
output_file = File.new(output_path, 'w')
output_file.write(JSON.dump(deep_sort_hash(data)))
end
##########################################
# nodeset mode history generation #
##########################################
# calculate the prio of the node
# if only p1 is defined, the node is exclusive
def prio_mode(prio)
return 'undefined' if prio.nil? || prio.values.flatten.empty?
keys_to_check = prio.keys - ['besteffort']
only_p1_filled = keys_to_check.all? do |key|
if key == 'p1'
!prio[key].nil? && !prio[key].empty?
else
prio[key].nil? || prio[key].empty?
end
end
if only_p1_filled
"exclusive #{prio['p1'].join(',')}"
else
'shared'
end
end
def process_commits(commits, git_repo, yaml_path, nodeset_history, known_nodeset)
commits.each do |date, sha|
yaml_content = known_nodeset.to_h { |a| [a, nil] }.update(load_yaml_from_git(git_repo, sha, yaml_path))
yaml_content.each do |nodeset, prio|
nodeset_history[nodeset] ||= []
mode = prio_mode(prio)
update_history(nodeset_history, nodeset, date, mode)
known_nodeset.add(nodeset)
end
end
end
def load_yaml_from_git(git_repo, sha, yaml_path)
relative_path = yaml_path.sub(git_repo.repo.path.gsub(/\.git$/, ''), '')
YAML.load(git_repo.show("#{sha}:#{relative_path}"), **$yaml_load_args) || {}
end
# Update history only if the mode changed, if so we terminate the last entry and
# add a new one
def update_history(nodeset_history, nodeset, date, mode)
last_entry = nodeset_history[nodeset].last
return unless last_entry.nil? || (last_entry[1] == 'ACTIVE' && last_entry[2] != mode)
last_entry[1] = date.dup if last_entry
nodeset_history[nodeset] << [date.dup, 'ACTIVE', mode]
end
def generate_nodeset_mode_history(options)
site_data_hierarchy = load_data_hierarchy
nodeset_history = {}
git_repo = Git.open(options[:conf_dir])
diff = git_repo.diff.name_status.keys.select { |x| x.start_with?('generators/access/') }
unless diff.empty?
abort "Please commit your changed on: #{diff.join(',')}. This generator use the git history to build history of the access mode of the nodes"
end
site_data_hierarchy['sites'].each_key do |site|
known_nodeset = Set.new
yaml_path = File.join(options[:conf_dir], "#{site}.yaml")
next unless File.exist?(yaml_path)
commits = git_repo.log.path(yaml_path).map { |commit| [commit.date, commit.sha] }.sort_by(&:first)
process_commits(commits, git_repo, yaml_path, nodeset_history, known_nodeset)
end
nodeset_history
end
##########################################
# access level generation #
##########################################
# Helper function
def value_and_tail_iterator(array)
Enumerator.new do |yielder|
array.each_with_index do |value, index|
yielder.yield [value, array[(index + 1)..-1]]
end
end
end
def priority_to_level(priority)
case priority
when 'p1'
40
when 'p2'
30
when 'p3'
20
when 'p4'
10
when 'besteffort'
0
end
end
def determine_access_level(expanded_ggas, gga)
value_and_tail_iterator(%w[p1 p2 p3 p4 besteffort]).each do |level, lower_levels|
next unless expanded_ggas[level]&.delete(gga)
lower_levels.each { |l| expanded_ggas[l]&.delete(gga) }
return { 'label' => level, 'level' => priority_to_level(level) }
end
{ 'label' => 'no-access', 'level' => -1 }
end
def create_access(prio, nodeset)
expanded_ggas = prio.transform_values { |x| expand_ggas(x) }
puts "Warning: No prio defined for #{nodeset}" if expanded_ggas.values.flatten.empty?
h = ALL_GGAS.map { |x| x['name'] }.sort.map do |gga|
level_info = determine_access_level(expanded_ggas, gga)
[gga, level_info]
end.to_h
expanded_ggas.each do |_, remanding_gga|
unless remanding_gga.empty?
puts "Warning: Some gga specified for the #{nodeset} nodeset do not exist: #{remanding_gga.join(',')}"
end
end
h
end
def expand_ggas(ggas)
return [] if ggas.nil?
expanded_ggas = []
ggas.each do |group|
to_remove = group.start_with?('-')
group = group[1..-1] if to_remove
if group.start_with?('%')
site = group[1..-1]
abort "Error: Unable to expand %#{site}: no site of that name" unless ALL_SITE.include?(site)
site_gga = ALL_GGAS.select { |x| x['site'] == site }.map { |x| x['name'] }
expanded_ggas = to_remove ? expanded_ggas - site_gga : expanded_ggas + site_gga
elsif group.start_with?('@')
group_gga = group[1..-1]
unless $group_of_gga.key?(group_gga)
abort "Error: Unable to expand @#{group_gga}: group of gga is not not defined?"
end
expanded_ggas = to_remove ? expanded_ggas - $group_of_gga[group_gga] : expanded_ggas + $group_of_gga[group_gga]
elsif to_remove
expanded_ggas.delete(group)
else
expanded_ggas << group
end
end
expanded_ggas.uniq
end
def generate_access_level(options)
site_data_hierarchy = load_data_hierarchy
group_config_path = File.join(options[:conf_dir], 'group.yaml')
if File.exist?(group_config_path)
YAML.load_file(group_config_path).each do |k, v|
$group_of_gga[k] = expand_ggas(v)
end
else
puts 'Warning: Skipping group configuration since there is no file'
end
nodesets = {}
site_data_hierarchy['sites'].each_key do |site|
site_config_path = File.join(options[:conf_dir], "#{site}.yaml")
if File.exist?(site_config_path)
yaml_access_file = YAML.load_file(site_config_path, **$yaml_load_args)
nodesets.update(yaml_access_file) unless yaml_access_file.nil?
else
puts "Warning: Skipping #{site} configuration since there is no file"
end
end
unspecified_nodesets = all_nodesets - nodesets.keys
overspecified_nodesets = nodesets.keys - all_nodesets
abort "Some nodeset are not configure: #{unspecified_nodesets.join(', ')}" unless unspecified_nodesets.empty?
puts "Warning: Some unkown (or not production) nodeset ARE configured : #{overspecified_nodesets.join(', ')}" unless overspecified_nodesets.empty?
nodesets.each_with_object({}) do |(nodeset, prio_input), acc|
create_access(prio_input, nodeset).each do |gga, prio|
acc[gga] = {} unless acc.key?(gga)
acc[gga][nodeset] = prio
end
end
end
def all_nodesets
site_data_hierarchy = load_data_hierarchy
nodesets = []
site_data_hierarchy['sites'].each do |_site, site_details|
site_details.fetch('clusters', {}).each do |_cluster, cluster_details|
next unless cluster_details['queues'].include?('production')
nodesets.concat(cluster_details['nodes'].map { |_, node_details| node_details['nodeset'] })
end
end
nodesets.uniq
end
 require 'refrepo/valid/input/schema'
 require 'refrepo/valid/homogeneity'
+require 'refrepo/accesses'
 # Create the network_equipment file
 def create_network_equipment(network_uid, network, refapi_path, site_uid = nil)
@@ -38,6 +39,8 @@ def generate_reference_api
   global_hash.delete('ipv6')
   # remove management_tools info
   global_hash.delete('management_tools')
+  # remove accesses
+  global_hash.delete('access')

   grid_path = Pathname.new(refapi_path)
   grid_path.mkpath()
@@ -46,10 +49,14 @@ def generate_reference_api
               global_hash.reject {|k, _v| k == "sites" || k == "network_equipments" || k == "disk_vendor_model_mapping"})
   end

+  accesses_path = Pathname.new(refapi_path).join("accesses")
+
   puts "Generating the reference api:\n\n"
   puts "Removing data directory:\n"
   FileUtils.rm_rf(Pathname.new(refapi_path).join("sites"))
   FileUtils.rm_rf(Pathname.new(refapi_path).join("network_equipments"))
+  FileUtils.rm_rf(accesses_path)
   puts "Done."

   # Generate global network_equipments (renater links)
@@ -157,15 +164,34 @@ def generate_reference_api
   end

-  #
-  # Write the all-in-one json file
-  #
-  # rename entry for the all-in-on json file
-  global_hash["sites"].each do |_site_uid, site|
-    site["network_equipments"] = site.delete("networks")
-  end
+  # Generate the json containing all access levels.
+  accesses_path.mkpath()
+  generate_accesses_json(
+    accesses_path.join("all.json"),
+    generate_access_level
+  )
+  # Generate the all-in-one json with just enough information for resources-explorer.
+  all_in_one_hash = {
+    "sites" => global_hash["sites"].to_h do |site_uid, site|
+      [site_uid, {
+        "uid" => site_uid,
+        "clusters" => site["clusters"].to_h do |cluster_uid, cluster|
+          [cluster_uid, {
+            "uid" => cluster_uid,
+            "queues" => cluster["queues"],
+            "nodes" => cluster["nodes"].to_h do |node_uid, node|
+              [node_uid, node.select { |key| %w[uid nodeset gpu_devices processor architecture].include?(key) }]
+            end
+          }]
+        end
+      }]
+    end
+  }
-  # Write global json file - Disable this for now, see https://www.grid5000.fr/w/TechTeam:CT-220
-  #write_json(grid_path.join(File.expand_path("../../#{global_hash['uid']}-all.json", File.dirname(__FILE__))), global_hash)
+  # Write the global json file.
+  # Writing the file at the root of the repository makes the full refrepo show
+  # up when GET-ing "/" in g5k-api, which we don't want; arbitrarily put it in accesses.
+  write_json(Pathname.new(refapi_path).join("accesses", "refrepo.json"), all_in_one_hash)
 end
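For reference, the following is a rough, hand-written sketch of the all-in-one refrepo.json produced above; the site, cluster and node identifiers and the queues value are illustrative, and the processor/architecture sub-hashes are elided. Per node, only the uid, nodeset, gpu_devices, processor and architecture keys are kept (gpu_devices simply does not appear for nodes without GPUs).
{
  "sites": {
    "nancy": {
      "uid": "nancy",
      "clusters": {
        "grvingt": {
          "uid": "grvingt",
          "queues": ["admin", "production"],
          "nodes": {
            "grvingt-1": {
              "uid": "grvingt-1",
              "nodeset": "grvingt",
              "processor": { "...": "..." },
              "architecture": { "...": "..." }
            }
          }
        }
      }
    }
  }
}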
@@ -19,6 +19,11 @@ module RefRepo::Utils
     return JSON::parse(d)
   end

+  def self.get_public_api(path, version='stable')
+    d = URI.open("https://public-api.grid5000.fr/#{version}/#{path}").read
+    return JSON::parse(d)
+  end
+
   def self.get_sites
     return (Dir::entries('input/grid5000/sites') - ['.', '..']).sort
   end
...
@@ -8,3 +8,4 @@ ipv6: required_hash
 software: required_hash
 disk_vendor_model_mapping: required_hash
 management_tools: required_hash
+access: required_hash