Define your base classes to be reusable, and where possible provide reasonable defaults for attributes. Depending on your policy, the defaults might be better suited for the production environment or for the test environment.

If a set of features requires modifying multiple attributes, or plugging into and modifying other attributes, use mixins. Try to design mixins so they can be reused with different base classes and with other mixins; a minimal sketch follows below.

It sometimes makes sense to decouple attribute names and their semantics from what ends up in the end config. Provide a default in a base class, plus a conversion function in the base class that converts it into the end config. For example, it is often useful to flatten the structure: instead of a few attributes that are complex dictionaries of other attributes, flatten them so they are easy to override one by one in descendant classes. Then provide a function that takes these flat attributes and combines them into the desired complex structure. Because the function itself can also be overridden (and the override can call back into the base version), you can still, on rare occasions, easily introduce modifications; see the flattening sketch below.

Annotate attributes in base classes (and sometimes in mixins): required, required_primary, @private, types and validators.

When overriding an attribute with the intent of just adding something extra, instead of overwriting the value, if possible use appending (for lists), add_kvs (for dicts), or merging (for more complex structs). They are easy to read and provide shorter syntax.

When implementing custom overrides and dealing with complex objects, do not modify the original value received from the base class. Instead, return a new value (i.e. append to a list using the + operator, or use sorted() instead of appending in place with append() or using .sort(); for dicts, make a deepcopy before adding or deleting keys or changing values). While in-place modification works fine in some simple cases, it will break in more complex situations. Consider this:

    from abc import ABC

    default_a = {'k': 'foo'}

    class A(YaclBaseClass, ABC):
        def a():
            return default_a

    class B(A):
        def a():
            d = get()
            d.update({'z': 'bar'})  # modifies the dict in place
            return d

    class C(A):
        pass

While this looks correct, it is not. get() returns a reference to default_a, and update() changes global state! If B is evaluated multiple times (with different contexts), the results will be wrong. Additionally, because get() calls are cached, the results might be incorrect even during evaluation in a single context:

    class B(A):
        def b():
            d = get('a')
            d.update({'z': 'bar'})  # modifies the cached value of 'a' in place
            return d

        def c():
            d = get('a')
            d.update({'x': 'moo'})
            return d

Because the order in which attributes are evaluated is undefined, b and c might end up with unpredictable values, like b == {'k': 'foo', 'z': 'bar'} and c == {'k': 'foo', 'z': 'bar', 'x': 'moo'}. Don't do that; use deepcopy or return a new value:

    from copy import deepcopy

    class B(A):
        def b():
            d = deepcopy(get('a'))
            d.update({'z': 'bar'})
            return d

        def c():
            d = deepcopy(get('a'))
            d.update({'x': 'moo'})
            return d

    # For lists:
    class B(A):
        def b():
            return get('a') + [5]

        def c():
            return get('a') + [6]

Alternatively, use the helper functions and tools provided by yacl:

    class B(A):
        a = appending([5])

    class B(A):
        def a():
            return get() + [5]

    # The two definitions above are equivalent, and both are safe to use in all cases.

Do not hardcode data that varies often, or that is expected to change during the lifetime of the component, in the yacl file itself. Put it in the context instead: things like locations (data centers, regions), number of servers, etc.
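To make the mixin advice above concrete, here is a minimal sketch. The class and attribute names are hypothetical, mixins are assumed to be plain classes, and only YaclBaseClass, get() and appending() are taken from the examples elsewhere in this document:

    from abc import ABC

    class JobBase(YaclBaseClass, ABC):
        # Reasonable defaults that subclasses can override.
        cpu = 1
        tags = ['managed']

    # A mixin plugs in one feature by adjusting several attributes at once.
    # It deliberately does not pick a base class, so it can be combined with
    # different bases and with other mixins.
    class DebugLoggingMixin:
        tags = appending(['debug-logging'])  # composes with whatever the base defines

        def log_level():
            return 'DEBUG'

    class FrontendJob(DebugLoggingMixin, JobBase):
        cpu = 2  # plain override of a simple value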
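The flattening pattern might look like this sketch; the attribute names and the shape of the combined structure are made up for illustration:

    class ServerBase(YaclBaseClass, ABC):
        # Flat attributes: trivial to override one by one in descendant classes.
        health_check_path = '/healthz'
        health_check_interval_s = 10
        health_check_timeout_s = 2

        # Conversion function combining the flat attributes into the complex
        # structure the end config actually wants. It can itself be overridden.
        def health_check():
            return {
                'path': get('health_check_path'),
                'interval_s': get('health_check_interval_s'),
                'timeout_s': get('health_check_timeout_s'),
            }

    class SlowServer(ServerBase):
        health_check_timeout_s = 30  # a one-line override, no dict surgery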
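And as a sketch of the context split (again with hypothetical names): data that is expected to change lives in the context, while the yacl file only decides how to consume it:

    # Context (checked in next to the yacl files, merged before evaluation):
    #   {'regions': ['us', 'eu'], 'servers_per_region': 12}

    class FleetBase(YaclBaseClass, ABC):
        def replicas():
            # Locations and counts come from the context; the yacl file only
            # combines them into the output structure.
            return {region: get('servers_per_region') for region in get('regions')}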
If you scale your system primarily by adjusting the number of locations and servers, this works well. However, keep things that you expect to stay the same (like memory per server, or the DNS name of a backend server) in the yacl part. Remember that you can still decide dynamically in yacl how to generate them based on the context, for example to use different backend DNS names for prod, qa and test jobs.

In many cases it is good to put multiple components in a single yacl file, at least their top-level references. This simplifies management, code review and refactors, and makes it easier to get an overview of all components of that part of the system. If all components are used by only one service (i.e. frontend, backend, database server, and dedicated monitoring jobs for them), and they are managed by (or of interest to) one team in some sense, put them all in one file. This way you can also easily ensure by inspection and tests that dependencies are fulfilled, e.g. that there are monitoring jobs in every data center that has a backend and/or frontend server.

Do not reference external systems by their IP addresses, or even their DNS names, or other explicit references in the config or context. Introduce an additional level of indirection in the yacl itself: create a logical name for the service (i.e. the name of the service plus its scope, for example 'shared-pubsub-publisher-prod'), then use a central shared registry with common helper functions to convert it to an app specific format (HTTP, DNS, etc.). This way naming conventions can be easily enforced, migrations are easier, and it is easy to figure out who is using a service before it is shut down, or when it undergoes highly critical changes, even if there is almost no traffic between the systems. The helpers can do even more and enhance automated tests, for example checking that RPC ACLs are set correctly, that collocation policies are respected, etc. It is better to discover these problems early. A sketch of this indirection follows below.

Use the 'Mixin' suffix for mixins.

Do not reference external systems from your yacl files. They should run just fine offline with no internet access, with a flat raw context provided from the same source code repository they run from. This ensures easy rollbacks, troubleshooting, incident response, and state tracking. If you want to reference external systems, simply have a trigger-based or periodic job that pulls this data into your context file in the same source code repository. You shouldn't be reading files or doing any other I/O in the yacl directly either; do it before invoking yacl, and bring all the data into the context instead. In the future yacl might enforce this by running Python in a sandbox.

Do not overuse Python features. Don't use metaclasses, complex decorators, too many third-party libraries, etc.

Avoid using templating languages in yacl. Use simple f-strings, string concatenation, list joins, etc. to create the needed pieces. Then, if required by the target config format, run an output mapper that takes these well-structured Python constructs and formats them into the end format (even then, templates like Django or Jinja2 should really be avoided, with explicit Python code preferred; see the sketch below). Templates often hide a lot of logic and use their own language with slightly different semantics, making them harder to troubleshoot or modify, especially for new users or team members.
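Going back to the logical-name indirection above, a sketch (with hypothetical registry contents and helper names) could look like this:

    # Central shared registry, maintained in one place.
    SERVICE_REGISTRY = {
        'shared-pubsub-publisher-prod': {'dns': 'pubsub-prod.internal.example.com', 'port': 8443},
        'shared-pubsub-publisher-qa': {'dns': 'pubsub-qa.internal.example.com', 'port': 8443},
    }

    def service_http_url(logical_name):
        # Common helper converting a logical name into an app specific format.
        entry = SERVICE_REGISTRY[logical_name]
        return f"https://{entry['dns']}:{entry['port']}"

    class PublisherClient(YaclBaseClass, ABC):
        def pubsub_url():
            # 'env' is assumed to be provided by the context.
            return service_http_url(f"shared-pubsub-publisher-{get('env')}")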
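An output mapper in this spirit can be plain Python; this hypothetical sketch renders a simple key=value format with f-strings and a join, with no templating language involved:

    def to_ini(section, kvs):
        # Takes well-structured Python values and formats them into the end format.
        lines = [f'[{section}]']
        lines.extend(f'{key} = {value}' for key, value in sorted(kvs.items()))
        return '\n'.join(lines)

    # to_ini('server', {'port': 8080, 'region': 'eu'}) returns:
    # [server]
    # port = 8080
    # region = eu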
If possible, try to fit simple values on one line. To this effect, explore the use of appending, merging, add_kvs, literals, ternary_eq, ternary_in, and inline lambdas, i.e. prefer this:

    tags = appending(['t1', 't2'])

If you need some conditionals, try this:

    in_eu_or_us = ternary_in('region', ['us', 'eu'], True, False)

or lambdas:

    in_eu_or_us = lambda: get('region') in ['us', 'eu']

Using an if/else expression is also nice:

    where = lambda: 'us_eu' if get('region') in ['us', 'eu'] else 'elsewhere'

Do not use global state of any kind, unless it is read-only state initialized once at script load. Do not use lazy initialization, modifiable state, or state with side effects (including I/O, networking, or reading from and writing to databases). In the future these rules might be enforced.

Do not use stateful decorators that might change things. '@memoize' is especially dangerous: if the function being memoized depends in any way on other attributes or the context (i.e. it transitively calls 'get'), it will lead to wrong results. If it doesn't, it is often better to simply precompute the result at the module level and reference that (with an optional deepcopy).

Do not use '@staticmethod'. All attributes are static in yacl, and the extra decorator might interfere with the evaluator. Do not use '@property'. The evaluator already converts all attributes to properties in a sense, and all references are uniform no matter whether the attribute is a value or a function; using '@property' will interfere with the evaluator. Use of '@private' should be fine, but it is untested, often unnecessary, and could interfere with inheritance in the evaluator. Usually the base class in your chain will define the output module (i.e. based on a whitelist of keys, or a protocol buffer definition), and all other attributes will be ignored; adding '@private' just makes everything take more space in the source code for no real benefit.

It is fine to use the full power of Python and all modules temporarily, including stateful objects, as long as they are confined to a top-level object or used privately inside an attribute, and nothing leaks. However, things like I/O, network connections and random number generation should be avoided at all cost. I/O (even read-only) and networking should be moved to the context part, or performed outside of yacl and stored as a read-only text file that is merged into the context. Random number generation should be replaced with a stable and repeatable method; a good option is to hash some other attributes to derive a stable pseudo-random number (see the sketch below). Similarly, don't use 'time.time()' (other than for debugging or performance tests) or 'now()' in your attributes. You can use 'yacl_base.start_time' to get a timestamp of when yacl started, but even that should be avoided.

Do not use a templating language like Jinja for complex logic; using f-strings or manually concatenating strings is often better. One way of doing iterations is to define some extra attributes that generate lists, sets and dictionaries, then consume them in some other attribute and output that. This follows a more modular approach, just like a normal programming language would: separation of concerns, the ability to share helper functions between many different configs, and the use of other useful Python features like default arguments, keyword arguments, list comprehensions, Python string processing libraries, math functions, dicts, etc.
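A sketch of the hashing approach to stable pseudo-random numbers (the attribute names are hypothetical):

    import hashlib

    def stable_fraction(*parts):
        # Deterministic pseudo-random float in [0, 1); the same inputs always
        # produce the same value, across runs and machines.
        digest = hashlib.sha256('|'.join(str(p) for p in parts).encode()).digest()
        return int.from_bytes(digest[:8], 'big') / 2**64

    class JobBase(YaclBaseClass, ABC):
        def restart_jitter_s():
            # Derived from other attributes, so it is stable and repeatable.
            return int(stable_fraction(get('job_name'), get('region')) * 60)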
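The iteration style with one attribute generating a collection and another consuming it might look like this (names hypothetical):

    class MonitoringBase(YaclBaseClass, ABC):
        def alert_emails():
            # Intermediate attribute generating a list; easy to override,
            # document, and test on its own.
            return [f'{team}-oncall@example.com' for team in get('teams')]

        def alerting_config():
            # Consumer attribute combining intermediate attributes into the
            # structure that ends up in the output.
            return {'recipients': sorted(get('alert_emails')), 'severity': 'page'}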
The ability to use separate attributes also allows easier sub-classing using yacl features, and adding documentation strings (which are useful both to humans and to automated tools that generate "documentation"). Also, Jinja code has slightly different semantics compared to Python, has restrictions on functions and filters, is designed mostly for HTML templating (i.e. it has automatic HTML escaping), has configurable syntax (which is detrimental to learning other people's configs), often requires an extra pre-processing step ("compilation"), and is not particularly fast. It is not suitable for configs in the long term. If you want to generate some HTML string / content as part of your config, by all means feel free to call Jinja or Django from your yacl attributes. But the power of yacl and Python is enough to fully replace Jinja (and more) in almost all config use cases (including integration with Ansible).

It is recommended to name all your attributes using small alphanumeric identifiers, just as you would for Python methods, functions and variables. If the output format requires different names, just provide a mapping dictionary and use it in your _execute method; a sketch follows.
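Such a mapping might look like the following sketch; how _execute consumes it is assumed here, so treat this as illustrative only:

    # Maps internal snake_case attribute names to the names the output format
    # expects; applied in _execute (exact signature assumed).
    OUTPUT_KEY_MAP = {
        'num_replicas': 'NumReplicas',
        'health_check': 'HealthCheck',
    }

    def rename_for_output(values):
        # Unmapped keys pass through unchanged.
        return {OUTPUT_KEY_MAP.get(key, key): value for key, value in values.items()}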