package | package-description
---|---
zcgtools | No description available on PyPI. |
zChainer | scikit-learn like interface and stacked autoencoder for chainer.

Requirements: numpy, scikit-learn, chainer >= 1.5

Installation:

    pip install zChainer

Usage

Autoencoder:

    import numpy as np
    import chainer.functions as F
    import chainer.links as L
    from chainer import ChainList, optimizers
    from zChainer import NNAutoEncoder, utility

    data = (..).astype(np.float32)

    encoder = ChainList(L.Linear(784, 200), L.Linear(200, 100))
    decoder = ChainList(L.Linear(200, 784), L.Linear(100, 200))

    # You can set your own forward function. Default is as below.
    # def forward(self, x):
    #     h = F.dropout(F.relu(self.model[0](x)))
    #     return F.dropout(F.relu(self.model[1](h)))
    #
    # NNAutoEncoder.forward = forward

    ae = NNAutoEncoder(encoder, decoder, optimizers.Adam(),
                       epoch=100, batch_size=100,
                       log_path="./ae_log_" + utility.now() + ".csv",
                       export_path="./ae_" + utility.now() + ".model")
    ae.fit(data)

Training and testing:

    import numpy as np
    import chainer.functions as F
    import chainer.links as L
    from chainer import ChainList, optimizers
    from zChainer import NNManager, utility
    import pickle

    X_train = (..).astype(np.float32)
    y_train = (..).astype(np.int32)
    X_test = (..).astype(np.float32)
    y_test = (..).astype(np.int32)

    # Create a new network
    model = ChainList(L.Linear(784, 200), L.Linear(200, 100), L.Linear(100, 10))

    # or load a serialized model
    # f = open("./ae_2015-12-01_11-26-45.model")
    # model = pickle.load(f)
    # f.close()
    # model.add_link(L.Linear(100, 10))

    def forward(self, x):
        h = F.relu(self.model[0](x))
        h = F.relu(self.model[1](h))
        return F.relu(self.model[2](h))

    def output(self, y):
        y_trimed = y.data.argmax(axis=1)
        return np.array(y_trimed, dtype=np.int32)

    NNManager.forward = forward
    NNManager.output = output
    nn = NNManager(model, optimizers.Adam(), F.softmax_cross_entropy,
                   epoch=100, batch_size=100,
                   log_path="./training_log_" + utility.now() + ".csv")
    nn.fit(X_train, y_train, is_classification=True)
    nn.predict(X_test, y_test) |
zcheck | zcheck is a command-line utility to check the configuration of a production Zenko deployment and diagnose problems in it.

Prerequisites: zcheck requires a Helm installation that is configured to access Tiller running inside Kubernetes.

Installation: zcheck can be installed directly from PyPI using pip:

    pip install zcheck

A Docker image is also provided for convenience:

    docker pull zenko/zcheck:latest
    docker run -it zenko/zcheck help

Syntax: zcheck commands conform to the following syntax:

    zcheck <global option> <subcommand> <-flag or --verbose_option> <optional target>

Global options:

    --mongo              Override the default Mongo connection string (host:port)
    -r, --helm-release   The Helm release name under which Zenko was installed.

Subcommands:

    checkup   Run all checks and tests (may take a while).
    k8s       Check Kubernetes-related configuration.
              -c, --check-services  Attempt to connect to defined services and report their status.
    orbit     Check overlay configuration applied via Orbit.
    backends  Check existence and configuration of backend buckets.
              -d, --deep  Enable deep checking. Check every Zenko bucket for its backing bucket (same as zcheck buckets).
    buckets   Check every Zenko bucket for its backend bucket. |
zc.htmlchecker | HTML/DOM Checker

When testing code (like widgets) that generates DOM nodes, we want to be able to make assertions about what matters. Examples of things we'd like to ignore:

- attribute order
- extra attributes
- extra classes
- extra nodes

zc.htmlchecker provides a checker object that can be used by itself, or as a doctest output checker.

Getting started

Let's look at some examples. Here's a sample expected string:

<body>
<button class="mybutton">press me</button>
</body>Let’s create a checker:>>> import zc.htmlchecker
>>> checker = zc.htmlchecker.HTMLChecker()You can call its check method with expected and observed HTML:>>> checker.check(
... expected,
... """<html><body><button x='1' class="widget mybutton">press me</button>
... </body></html>""")If there’s a match, then nothing is returned. For there to be a
match, the expected output merely has to be unambiguously found in the
observed output. In the above example, there was a single body tag,
so it knew how to do the match. Note that whitespace differences were
ignored, as were extra observed attributes and an extra class.

doctest Checker

To use zc.htmlchecker as a doctest checker, pass an instance of HTMLChecker as an output checker when setting up your doctests. When used as a doctest checker, expected text that doesn't start with < is checked with the default checker, or a checker you pass in as base. You may want to have some HTML examples checked with another checker. In that case, you can specify a prefix: only examples that begin with the prefix will be checked with the HTML checker, and the prefix will be removed.

Expecting multiple nodes

We can expect more than a single node:

<button>Cancel</button>
<button>Save</button>

This example expects two button nodes somewhere in the output.

>>> checker.check(expected,
... """<html><body>
... <button id='cancel_button' class="button">Cancel</button>
... <button id='save_button' class="button">Save</button>
... </body></html>""")But if there isn’t a match, it can be harder to figure out what’s
wrong:>>> checker.check(expected,
... """<html><body>
... <button id='cancel_button' class="button">Cancel</button>
... <button id='save_button' class="button">OK</button>
... </body></html>""")
Traceback (most recent call last):
...
MatchError: Couldn't find wildcard match
Expected:
<button>
Save
</button>
<BLANKLINE>
Observed:
<html>
<body>
<button class="button" id="cancel_button">
Cancel
</button>
<button class="button" id="save_button">
OK
</button>
</body>
</html>We’ll come back to wild card matches in a bit. Here, the matcher
detected that it didn’t match a button, but couldn’t be specific about
which button was the problem. We can make its job easier using ids:

<button id='cancel_button'>Cancel</button>
<button id='save_button'>Save</button>

Now we're looking for button nodes with specific ids.

>>> checker.check(expected,
... """<html><body>
... <button id='cancel_button' class="button">Cancel</button>
... <button id='save_button' class="button">OK</button>
... </body></html>""")
Traceback (most recent call last):
...
MatchError: text nodes differ u'Save' != u'OK'
Expected:
<button id="save_button">
Save
</button>
<BLANKLINE>
Observed:
<button class="button" id="save_button">
OK
</button>
<BLANKLINE>

That's a lot more helpful.

Wildcards

Speaking of wild card matches, sometimes you want to ignore
intermediate nodes. You can do this by using an ellipsis at the top of
a node that has intermediate nodes you want to ignore:<form>
...
<button id='cancel_button'>Cancel</button>
<button id='save_button'>Save</button>
</form>

In this case, we want to find button nodes inside a form node. We don't care if there are intermediate nodes.

>>> checker.check(expected,
... """<html><body>
... <form>
... <div>
... <button id='cancel_button' class="button">Cancel</button>
... <button id='save_button' class="button">Save</button>
... </div>
... </form>
... </body></html>""")

When looking for expected text, we basically do a wild-card match on the observed text.

Sometimes, we want to check for text nodes that may be embedded in
some generated construct that we can’t control (like a grid produced
by a library). To do that, include a text node that starts with a
line containing an ellipsis. For example, we may expect a grid/table
with some data:<div id="mygrid" name="">
...
Name Favorite Color
Sally Red
Bill Blue
</div>We don’t know exactly how our library is going to wrap the data, so we
just test for the presence of the data.

>>> import sys
>>> try: checker.check(expected,
... """<html><body>
... <div id='mygrid' name='' xid="1">
... <table>
... <tr><th>Name</th><th>Favorite Color</th></tr>
... <tr><td>Sally</td><td>Red </td></tr>
... <tr><td>Bill </td><td>Green</td></tr>
... </table>
... </div>
... </body></html>""")
... except zc.htmlchecker.MatchError:
... error = sys.exc_info()[1]
... else: print 'oops'
>>> print error # doctest: +ELLIPSIS
Blue not found in text content.
...>>> checker.check(expected,
... """<html><body>
... <div id='mygrid' name='' xid="1">
... <table>
... <tr><th>Name</th><th>Favorite Color</th></tr>
... <tr><td>Sally</td><td>Red </td></tr>
... <tr><td>Bill </td><td>Blue</td></tr>
... </table>
... </div>
... </body></html>""")

You can use other BeautifulSoup parsers

HTMLChecker uses BeautifulSoup. It uses the 'html5lib' parser by default, but you can pass a different parser name. You probably want to steer clear of the 'html.parser' parser, as it's buggy:

>>> checker = zc.htmlchecker.HTMLChecker(parser='html.parser')
>>> checker.check('<input id="x">', '<input id="x"><input>')
Traceback (most recent call last):
...
MatchError: Wrong number of children 1!=0
Expected:
<input id="x"/>
<BLANKLINE>
Observed:
<input id="x">
<input/>
</input>

Here, 'html.parser' decided that the input tags needed closing tags, even though the HTML input tag is empty. This is likely in part because the underlying parser is an XHTML parser.

Changes: 0.1.0 (2013-08-31) Initial release. |
zci | This is pretty much an idea at this stage. More details to follow. |
zc.i18n | This package provides additional I18n and L10n features. In particular it provides an API to compute time durations over various timezones.

Detailed Documentation

Time Duration Computation

The duration format code is not ideal, but as the code notes, the icu library
does not appear to support internationalizing dates. Therefore, this approach
tries to get close enough to be flexible enough for most localization. Only
time, and localizers, will tell if it is a reasonable approach.The formatter always gives the first two pertinent measures of a duration,
leaving off the rest. The rest of the file just shows some examples.>>> from zc.i18n.duration import format
>>> from zope.publisher.browser import TestRequest
>>> request = TestRequest()
>>> from datetime import timedelta
>>> format(request, timedelta(days=5))
u'5 days '
>>> format(request, timedelta(days=1))
u'1 day '
>>> format(request, timedelta(days=1, hours=13, minutes=12))
u'1 day 13 hours '
>>> format(request, timedelta(hours=13, minutes=12))
u'13 hours 12 minutes '
>>> format(request, timedelta(hours=13))
u'13 hours '
>>> format(request, timedelta(hours=1, minutes=1, seconds=1))
u'1 hour 1 minute '
>>> format(request, timedelta(minutes=45, seconds=1))
u'45 minutes 1 second'
>>> format(request, timedelta(seconds=5))
u'5 seconds'
>>> format(request, timedelta(days=-1, hours=-2))
u'-1 day -2 hours '
>>> format(request, timedelta(days=-2, hours=22))
u'-1 day -2 hours '
>>> format(request, timedelta(days=-1))
u'-1 day '
>>> format(request, timedelta(days=-1, hours=-13, minutes=-12))
u'-1 day -13 hours '
>>> format(request, timedelta(hours=-13, minutes=-12))
u'-13 hours -12 minutes '
>>> format(request, timedelta(hours=-13))
u'-13 hours '
>>> format(request, timedelta(hours=-1, minutes=-1, seconds=-1))
u'-1 hour -1 minute '
>>> format(request, timedelta(minutes=-45, seconds=-1))
u'-45 minutes -1 second'
>>> format(request, timedelta(seconds=-5))
u'-5 seconds'
>>> format(request, timedelta())
u'No time'

CHANGES

0.7.0 (2009-07-24)
- Fixed tests to work with latest package versions.
- The buildout now also pulls in the test extras, which is required.

0.6.1 (2008-05-20)
- No code changes, and only a very minor documentation tweak. Re-released to avoid confusion over package versions found in the wild.

0.5.2 (2007-11-03)
- Improve package data.

0.5.1 (2006-05-24)
- Package data update.

0.5.0 (2006-05-24)
- Initial release. |
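The "first two pertinent measures" rule can be sketched in plain Python. This is an illustration only: the real zc.i18n.duration.format localizes the unit names through the request, which this stdlib version does not attempt.

```python
from datetime import timedelta

def format_duration(delta):
    # Report at most the first two adjacent nonzero measures of a
    # duration, mirroring the rule described above (not localized).
    seconds = int(delta.total_seconds())
    sign = '-' if seconds < 0 else ''
    seconds = abs(seconds)
    parts = []
    for name, size in (('day', 86400), ('hour', 3600),
                       ('minute', 60), ('second', 1)):
        value, seconds = divmod(seconds, size)
        if value:
            parts.append('%s%d %s%s' % (sign, value, name,
                                        '' if value == 1 else 's'))
            if len(parts) == 2:
                break
        elif parts:
            break  # a zero after a nonzero measure ends the report
    return ' '.join(parts) if parts else 'No time'

format_duration(timedelta(days=1, hours=13, minutes=12))  # '1 day 13 hours'
format_duration(timedelta(minutes=45, seconds=1))         # '45 minutes 1 second'
```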
zc.iakovenkobuildout | This is a security placeholder package.
If you want to claim this name for legitimate purposes,
please contact us [email protected]@yandex-team.ru |
zc.icp | In multi-machine (or multi-process) web server installations some set of web
servers will likely be more able to quickly service an HTTP request than
others. HTTP accelerators (reverse proxies) like Squid can use ICP queries
to find the most appropriate server(s) to handle a particular request. This
package provides a small UDP server that can respond to ICP queries based on
pluggable policies.

[ICP] http://www.faqs.org/rfcs/rfc2186.html
[Squid] http://www.squid-cache.org/

Change history: 1.0.0 (2008-02-07) Initial release.

When ICP is Useful

When generating content dynamically, having all the data available locally to
fulfil a request can have a profound effect on service time. One approach to
having the data available is to have one or more caches. In some situations
those caches are not large enough to contain the entire working set required
for efficient servicing of incoming requests. Adding additional request
handlers (servers or processes) doesn’t help because the time to load the data
from one or more storage servers (e.g., databases) is the dominant factor in
request time. In those situations the request space can be partitioned such
that the portion of the working set a particular handler (server or process) is
responsible for can fit in its cache(s).Statically configuring request space partitioning may be difficult,
error-prone, or even impossible. In those circumstances it would be nice to
let the origin servers provide feedback on whether or not they should handle a
particular request. That’s where ICP comes in.Hits and MissesWhen an ICP query request is received, the server can return one of ICP_OP_HIT,
ICP_OP_MISS, ICP_OP_ERR, ICP_OP_MISS_NOFETCH, or ICP_OP_DENIED. The meanings
of these result codes are defined in the ICP RFC as below.ICP_OP_HITAn ICP_OP_HIT response indicates that the requested URL exists in
this cache and that the requester is allowed to retrieve it.ICP_OP_MISSAn ICP_OP_MISS response indicates that the requested URL does not
exist in this cache. The querying cache may still choose to fetch
the URL from the replying cache.ICP_OP_ERRAn ICP_OP_ERR response indicates some kind of error in parsing or
handling the query message (e.g. invalid URL).ICP_OP_MISS_NOFETCHAn ICP_OP_MISS_NOFETCH response indicates that this cache is up,
but is in a state where it does not want to handle cache misses.
An example of such a state is during a startup phase where a cache
might be rebuilding its object store. A cache in such a mode may
wish to return ICP_OP_HIT for cache hits, but not ICP_OP_MISS for
misses. ICP_OP_MISS_NOFETCH essentially means “I am up and
running, but please don’t fetch this URL from me now.”Note, ICP_OP_MISS_NOFETCH has a different meaning than
ICP_OP_MISS. The ICP_OP_MISS reply is an invitation to fetch the
URL from the replying cache (if their relationship allows it), but
ICP_OP_MISS_NOFETCH is a request to NOT fetch the URL from the
replying cache.ICP_OP_DENIEDAn ICP_OP_DENIED response indicates that the querying site is not
allowed to retrieve the named object from this cache. Caches and
proxies may implement complex access controls. This reply must be
be interpreted to mean “you are not allowed to request this
particular URL from me at this particular time.”Because we want to use ICP to communicate about whether or not an origin server
(as opposed to a cache server) wants to handle a particular request, we will
use slightly different definitions for some of the result codes.ICP_OP_HITAn ICP_OP_HIT response indicates that the queried server would prefer to
handle the HTTP request. The reason the origin server is returning a hit
might be that it has recently handled “similar” requests, or that it has
been configured to handle the partition of the URL space occupied by the
given URL.ICP_OP_MISSAn ICP_OP_MISS response indicates that the queried server does not have a
preference to service the request, but will be able to handle the request
nonetheless. This is normally the default response.ICP_OP_MISS_NOFETCHAn ICP_OP_MISS_NOFETCH response indicates that the requesting server may
not request the named object from this server. This may be because the
origin server is under heavy load at the time or some other policy
indicates that the request must not be forwarded at the moment.The response (hit, miss, etc.) to a particular ICP query is based on one or
more configured policies. The mechanics of defining and registering those
policies is explained in the next section. This package does not implement the deprecated ICP_OP_HIT_OBJ.

Defining Policies

To use this package one or more policies must be defined and registered. The Zope component architecture is used to manage the policies as "utilities". Policies must implement the IICPPolicy interface.

>>> from zc.icp.interfaces import IICPPolicy
>>> IICPPolicy
<InterfaceClass zc.icp.interfaces.IICPPolicy>

At this point no policy is registered, so any URL will generate a miss:

>>> import zc.icp
>>> zc.icp.check_url('http://example.com/foo')
'ICP_OP_MISS'

Let's say we want to return an ICP_OP_HIT for all URLs containing "foo". We can define that policy like so:

>>> def foo_hit_policy(url):
... if 'foo' in url:
... return 'ICP_OP_HIT'When registering this policy we have to provide an associated name. Any
subsequent registration with the same name will override the previous
registration. The default name is the empty string.>>> import zope.component
>>> zope.component.provideUtility(foo_hit_policy, IICPPolicy, 'foo')The registered policy is immediately available.>>> zc.icp.check_url('http://example.com/foo')
'ICP_OP_HIT'Non-foo URLs are still misses.>>> zc.icp.check_url('http://example.com/bar')
'ICP_OP_MISS'Now we can add another policy to indicate that we don’t want any requests with
“baz” in them.>>> def baz_hit_policy(url):
... if 'baz' in url:
... return 'ICP_OP_MISS_NOFETCH'
>>> zope.component.provideUtility(baz_hit_policy, IICPPolicy, 'baz')>>> zc.icp.check_url('http://example.com/foo')
'ICP_OP_HIT'
>>> zc.icp.check_url('http://example.com/bar')
'ICP_OP_MISS'
>>> zc.icp.check_url('http://example.com/baz')
'ICP_OP_MISS_NOFETCH'The policies are prioritized by name. The first policy to return a non-None
result is followed. Therefore if we check a URL with both “foo” and “baz” in
it, the policy for “baz” is followed.>>> zc.icp.check_url('http://example.com/foo/baz')
'ICP_OP_MISS_NOFETCH'Running the ServerStarting the server begins listening on the given port and IP.>>> zc.icp.start_server('localhost', 3130)
info: ICP server started
Address: localhost
Port: 3130Now we can start sending ICP requests and get responses back. To do so we must
first construct an ICP request.>>> import struct
>>> query = zc.icp.HEADER_LAYOUT + zc.icp.QUERY_LAYOUT
>>> url = 'http://example.com/\0'
>>> format = query % len(url)
>>> icp_request = struct.pack(
... format, 1, 2, struct.calcsize(format), 0xDEADBEEF, 0, 0, 0, 0, url)
>>> print zc.icp.format_datagram(icp_request)
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| ICP_OP_QUERY | Version: 2 | Message Length: 44 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Request Number: DEADBEEF |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Options: 0 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Option Data: 0 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Sender Host Address: 0 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| |
| Payload: \x00\x00\x00\x00http://example.com/\x00 |
| |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+After sending the request we get back a response.>>> import socket
>>> s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
>>> s.connect(('localhost', 3130))>>> s.send(icp_request)
44
>>> icp_response = s.recv(16384)
>>> icp_response
'\x03\x02\x00(\xde\xad\xbe\xef\x00\x00\x00\x00\...http://example.com/\x00'
>>> print zc.icp.format_datagram(icp_response)
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| ICP_OP_MISS | Version: 2 | Message Length: 40 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Request Number: DEADBEEF |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Options: 0 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Option Data: 0 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Sender Host Address: 0 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| |
| Payload: http://example.com/\x00 |
| |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+That was a miss. We can also provoke a hit by satisfying one of our policies.>>> url = 'http://example.com/foo\0'
>>> format = query % len(url)
>>> icp_request = struct.pack(
... format, 1, 2, struct.calcsize(format), 0xDEADBEEF, 0, 0, 0, 0, url)
>>> print zc.icp.format_datagram(icp_request)
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| ICP_OP_QUERY | Version: 2 | Message Length: 47 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Request Number: DEADBEEF |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Options: 0 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Option Data: 0 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Sender Host Address: 0 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| |
| Payload: \x00\x00\x00\x00http://example.com/foo\x00 |
| |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+>>> s.send(icp_request)
47
>>> print zc.icp.format_datagram(s.recv(16384))
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| ICP_OP_HIT | Version: 2 | Message Length: 43 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Request Number: DEADBEEF |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Options: 0 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Option Data: 0 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Sender Host Address: 0 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| |
| Payload: http://example.com/foo\x00 |
| |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
zc.intid | This package provides an API to create integer ids for any object.
Objects can later be looked up by their id as well. This is similar to
the zope.intid package, but it has the advantage of producing fewer conflicts. Documentation, including installation and configuration instructions and a detailed changelog, is hosted at http://zcintid.readthedocs.io. |
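The two-way id registry idea behind zc.intid can be sketched in a few lines. Method names here are modeled loosely on zope.intid-style utilities and are assumptions; the real package's API and conflict-avoidance strategy differ:

```python
import itertools

class IntIds:
    # Hand out small sequential integer ids and support lookup in
    # both directions: object -> id and id -> object.
    def __init__(self):
        self._counter = itertools.count(1)
        self._objects = {}   # uid -> object
        self._ids = {}       # id(object) -> uid

    def register(self, obj):
        key = id(obj)
        if key not in self._ids:
            uid = next(self._counter)
            self._ids[key] = uid
            self._objects[uid] = obj
        return self._ids[key]

    def getObject(self, uid):
        return self._objects[uid]

    def getId(self, obj):
        return self._ids[id(obj)]
```

Registering the same object twice returns the same id, and ids remain stable for the life of the registry.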
zc.isithanging | A zc.monitor plugin for testing whether a function hangs

Sometimes, computation stops and it can be hard to find out why. Tools like strace can be helpful, but are very low level. If a call hangs calling external network services, all you might see is a select or poll call and not what service was being called.

Isithanging provides a simple registry and a helper function for registering and unregistering calls. To illustrate how this works, we'll use a test function that blocks until we unblock it by setting an event:

>>> import zc.isithanging.tests
>>> event, blocker = zc.isithanging.tests.create_blocker()

The blocker function just returns any arguments it was passed. To check whether a function is blocking, we use zc.isithanging.run to run the function. We'll do so here in a thread:

>>> import zc.thread
>>> @zc.thread.Thread
... def thread():
... print zc.isithanging.run(blocker, 1, foo=2)

There's also a decorator that wraps a function and takes care of calling run. Let's create some more jobs:

>>> e1, b1 = zc.isithanging.tests.create_blocker()
>>> suspect = zc.isithanging.suspect(b1)
>>> @zc.thread.Thread
... def t1():
... print suspect(1)Above, we used the suspect decorator as a function (rather than with
decorator syntax.)>>> e2, b2 = zc.isithanging.tests.create_blocker()
>>> @zc.thread.Thread
... def t2():
... print zc.isithanging.run(b2, 2)

We can see what's running by looking at zc.isithanging.running:

>>> import time
>>> now = time.time()
>>> for r in zc.isithanging.running:
... print r.show(now)
Sun Nov 16 09:48:29 2014 1s <function f at 0x10251e500> (1,) {'foo': 2}
Sun Nov 16 09:48:29 2014 1s <function f at 0x10251e9b0> (1,) {}
Sun Nov 16 09:48:29 2014 1s <function f at 0x10251eb18> (2,) {}The show function shows start time, elapsed time in seconds, function
and arguments.When a job stops, it’s automatically unregistered:>>> e1.set(); t1.join()
((1,), {})>>> for r in zc.isithanging.running:
... print r
Sun Nov 16 09:48:29 2014 2s <function f at 0x102d1e500> (1,) {'foo': 2}
Sun Nov 16 09:48:29 2014 2s <function f at 0x102d1eb18> (2,) {}There’s a zc.monitor command that prints the jobs:>>> import sys
>>> zc.isithanging.isithanging(sys.stdout)
Sun Nov 16 09:48:29 2014 2s <function f at 0x102d1e500> (1,) {'foo': 2}
Sun Nov 16 09:48:29 2014 2s <function f at 0x102d1eb18> (2,) {}Let’s finish the jobs and try again:>>> event.set(); thread.join()
((1,), {'foo': 2})
>>> e2.set(); t2.join()
((2,), {})>>> zc.isithanging.isithanging(sys.stdout)Changes0.3.0 (2014-11-17)Added a “suspect” decorator to decorate functions suspected of hanging.0.2.0 (2014-11-17)(Accidental re-release of 0.1.)0.1.0 (2014-11-17)Initial release |
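The register-run-unregister behavior of the run helper described above can be sketched as follows. This is a simplified stand-in: the real records also carry the start time that show() formats, and the real registry is shared with the zc.monitor command.

```python
import time

running = []  # registry of in-flight calls, inspectable from elsewhere

def run(func, *args, **kw):
    # Register the call, invoke it, and always unregister on the way
    # out, so a monitor thread can list whatever is currently blocked.
    record = (time.time(), func, args, kw)
    running.append(record)
    try:
        return func(*args, **kw)
    finally:
        running.remove(record)
```

Because the unregistration happens in a finally clause, a call that raises is removed from the registry just like one that returns.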
zc.iso8601 | This package collects together functions supporting the data formats described
in ISO 8601. Time zone support is provided by the pytz package. The following functions are provided in the zc.iso8601.parse module:

date(s): Parse a date value that does not include time information. Returns a Python date value.
datetime(s): Parse a date-time value that does not include time-zone information. Returns a Python datetime value.
datetimetz(s): Parse a date-time value that includes time-zone information. Returns a Python datetime value in the UTC timezone.

Changes: 0.2.0 (2011-10-10) Added date function, for completeness. 0.1.0 (2008-05-12) Initial release. |
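What datetimetz does (parse a zoned date-time, normalize to UTC) can be approximated with the stdlib alone. This is an illustration of the behavior, not the package's implementation; zc.iso8601 itself relies on pytz:

```python
from datetime import datetime, timezone

def datetimetz(s):
    # Parse an ISO 8601 date-time that carries zone information and
    # normalize the result to UTC, as described above.
    return datetime.fromisoformat(s).astimezone(timezone.utc)

datetimetz("2008-05-12T10:30:00+02:00")
# datetime(2008, 5, 12, 8, 30, tzinfo=timezone.utc)
```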
zcj_test | No description available on PyPI. |
zc.lazylist | UNKNOWN |
zcli | No description available on PyPI. |
zclient | Zclient Readme |
zc.lockfile | The zc.lockfile package provides a basic portable implementation of interprocess locks using lock files. The purpose is not specifically to lock files, but simply to provide locks with an implementation based on file-locking primitives. Of course, these locks could be used to mediate access to other files. For example, the ZODB file storage implementation uses file locks to mediate access to file-storage database files. The database files and lock files are separate files.

Detailed Documentation

Lock file support

The ZODB lock_file module provides support for creating file system
locks. These are locks that are implemented with lock files and
OS-provided locking facilities. To create a lock, instantiate a
LockFile object with a file name:>>> import zc.lockfile
>>> lock = zc.lockfile.LockFile('lock')If we try to lock the same name, we’ll get a lock error:>>> import zope.testing.loggingsupport
>>> handler = zope.testing.loggingsupport.InstalledHandler('zc.lockfile')
>>> try:
... zc.lockfile.LockFile('lock')
... except zc.lockfile.LockError:
... print("Can't lock file")
Can't lock file

To release the lock, use its close method:

>>> lock.close()

The lock file is not removed. It is left behind:

>>> import os
>>> os.path.exists('lock')
TrueOf course, now that we’ve released the lock, we can create it again:>>> lock = zc.lockfile.LockFile('lock')
>>> lock.close()

Hostname in lock file

In a container environment (e.g. Docker), the PID is typically always
identical even if multiple containers are running under the same operating
system instance.Clearly, inspecting lock files doesn’t then help much in debugging. To identify
the container which created the lock file, we need information about the
container in the lock file. Since Docker uses the container identifier or name
as the hostname, this information can be stored in the lock file in addition to
or instead of the PID.

Use the content_template keyword argument to LockFile to specify a custom lock file content format:

>>> lock = zc.lockfile.LockFile('lock', content_template='{pid};{hostname}')
>>> lock.close()

If you now inspected the lock file, you would see e.g.:

$ cat lock
123;myhostname

Change History

3.0.post1 (2023-02-28)
- Add python_requires to setup.py to prevent installing on unsupported old Python versions.

3.0 (2023-02-23)
- Add support for Python 3.9, 3.10, 3.11.
- Drop support for Python 2.7, 3.5, 3.6.
- Drop support for deprecated python setup.py test.

2.0 (2019-08-08)
- Extracted new SimpleLockFile that removes implicit behavior
writing to the lock file, and instead allows a subclass to define
that behavior.
(#15)
- SimpleLockFile and thus LockFile are now new-style classes. Any clients relying on LockFile being an old-style class will need to be adapted.
- Drop support for Python 3.4.
- Add support for Python 3.8b3.

1.4 (2018-11-12)
- Claim support for Python 3.6 and 3.7.
- Drop Python 2.6 and 3.3.

1.3.0 (2018-04-23)
- Stop logging failure to acquire locks. Clients can do that if they wish.
- Claim support for Python 3.4 and 3.5.
- Drop Python 3.2 support because pip no longer supports it.

1.2.1 (2016-06-19)
- Fixed: unlocking and locking didn't work when a multiprocessing
process was running (and presumably other conditions).

1.2.0 (2016-06-09)
- Added the ability to include the hostname in the lock file content.
- Code and ReST markup cosmetics. [alecghica]

1.1.0 (2013-02-12)
- Added Trove classifiers and made setup.py zest.releaser friendly.
- Added Python 3.2, 3.3 and PyPy 1.9 support.
- Removed Python 2.4 and Python 2.5 support.

1.0.2 (2012-12-02)
- Fixed: the fix included in 1.0.1 caused multiple pids to be written to the lock file.

1.0.1 (2012-11-30)
- Fixed: when there was lock contention, the pid in the lock file was
lost. Thanks to Daniel Moisset for reporting the problem and providing a fix with tests.
- Added test extra to declare test dependency on zope.testing.
- Using Python's doctest module instead of deprecated zope.testing.doctest.

1.0.0 (2008-10-18)
- Fixed a small bug in error logging.

1.0.0b1 (2007-07-18)
- Initial release |
zc.loggermonitor | The zc.loggermonitor package provides a zc.monitor plugin for getting
and setting logger levels.

>>> import sys, zc.loggermonitor

It is an error to call the monitor without user arguments:

>>> zc.loggermonitor.level(sys.stdout)
Traceback (most recent call last):
...
TypeError: level() takes at least 2 arguments (1 given)If you pass it a logger name, it returns the current effective level:>>> zc.loggermonitor.level(sys.stdout, '.')
NOTSET
>>> zc.loggermonitor.level(sys.stdout, 'mylogger')
NOTSETIf you pass a level it sets the level:>>> zc.loggermonitor.level(sys.stdout, '.', 'INFO')>>> zc.loggermonitor.level(sys.stdout, '.')
INFO
>>> zc.loggermonitor.level(sys.stdout, 'mylogger')
INFOYou can also pass a numeric value:>>> zc.loggermonitor.level(sys.stdout, 'mylogger', '5')
>>> zc.loggermonitor.level(sys.stdout, '.')
INFO
>>> zc.loggermonitor.level(sys.stdout, 'mylogger')
Level 5>>> zc.loggermonitor.level(sys.stdout, 'mylogger', '10')
>>> zc.loggermonitor.level(sys.stdout, '.')
INFO
>>> zc.loggermonitor.level(sys.stdout, 'mylogger')
DEBUG>>> zc.loggermonitor.level(sys.stdout, 'mylogger', 'NOTSET')
>>> zc.loggermonitor.level(sys.stdout, '.')
INFO
>>> zc.loggermonitor.level(sys.stdout, 'mylogger')
INFO>>> zc.loggermonitor.level(sys.stdout, '.', 'NOTSET')
>>> zc.loggermonitor.level(sys.stdout, '.')
NOTSET
>>> zc.loggermonitor.level(sys.stdout, 'mylogger')
NOTSETDownload |
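The get/set behavior shown in the transcript above maps directly onto the stdlib logging API. A rough standalone sketch (returning the value instead of writing to the monitor connection; '.' meaning the root logger follows the transcript's convention):

```python
import logging

def level(name, new_level=None):
    # Report a logger's effective level by name ('.' means the root
    # logger), or set it when a level name or number is given.
    logger = logging.getLogger('' if name == '.' else name)
    if new_level is None:
        return logging.getLevelName(logger.getEffectiveLevel())
    try:
        logger.setLevel(int(new_level))   # numeric levels like '5'
    except ValueError:
        logger.setLevel(new_level)        # symbolic names like 'INFO'

level('.', 'INFO')
level('mylogger')   # 'INFO', inherited from the root logger
```

Note how logging.getLevelName renders unnamed numeric levels as "Level 5", matching the transcript.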
zcloud | No description available on PyPI. |
zcls | Language:
🇺🇸🇨🇳 «ZCls» is a classification model training/inferring framework.

Supported Recognizers: refer to the roadmap for details.

Background

In the fields of object detection, object segmentation, and action recognition, there have been many training frameworks with high integration and a complete workflow, such as facebookresearch/detectron2 and open-mmlab/mmaction2. Object classification is the most developed and theoretically fundamental field in deep learning. Drawing on the existing training frameworks, ZCls implements a training/inferring framework for object classification models. I hope ZCls can give you a better experience.

Installation

See INSTALL

Usage

How to train: see Get Started with ZCls. Use builtin datasets: see Use Builtin Datasets. Use custom datasets: see Use Custom Datasets. Use a pretrained model: see Use Pretrained Model.

Maintainers

zhujian - Initial work - zjykzj

Thanks

@misc{ding2021diverse,
title={Diverse Branch Block: Building a Convolution as an Inception-like Unit},
author={Xiaohan Ding and Xiangyu Zhang and Jungong Han and Guiguang Ding},
year={2021},
eprint={2103.13425},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
@misc{ding2021repvgg,
title={RepVGG: Making VGG-style ConvNets Great Again},
author={Xiaohan Ding and Xiangyu Zhang and Ningning Ma and Jungong Han and Guiguang Ding and Jian Sun},
year={2021},
eprint={2101.03697},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
@misc{fan2020pyslowfast,
author = {Haoqi Fan and Yanghao Li and Bo Xiong and Wan-Yen Lo and
Christoph Feichtenhofer},
title = {PySlowFast},
howpublished = {\url{https://github.com/facebookresearch/slowfast}},
year = {2020}
}
@misc{zhang2020resnest,
title={ResNeSt: Split-Attention Networks},
author={Hang Zhang and Chongruo Wu and Zhongyue Zhang and Yi Zhu and Haibin Lin and Zhi Zhang and Yue Sun and Tong He and Jonas Mueller and R. Manmatha and Mu Li and Alexander Smola},
year={2020},
eprint={2004.08955},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
@misc{han2020ghostnet,
title={GhostNet: More Features from Cheap Operations},
author={Kai Han and Yunhe Wang and Qi Tian and Jianyuan Guo and Chunjing Xu and Chang Xu},
year={2020},
eprint={1911.11907},
archivePrefix={arXiv},
primaryClass={cs.CV}
}

For more thanks, check THANKS.

Contributing

Anyone's participation is welcome! Open an issue or submit PRs.

Small notes:

Git submissions should comply with Conventional Commits.
If versioned, please conform to the Semantic Versioning 2.0.0 specification.
If editing the README, please conform to the standard-readme specification.

License

Apache License 2.0 © 2020 zjykzj |
zcls2 | Language: 🇺🇸🇨🇳

«ZCls2» is a faster classification model training framework.

Background

After nearly one and a half years of development, ZCls has integrated many training features, including a configuration module, a register module, a training module, and many model implementations (resnet/mobilenet/senet-sknet-resnest/acbnet-repvgg-dbbnet/ghostnet/gcnet...). During development, it became clear that, compared with the current best classification training frameworks, such as apex, the training speed of ZCls is not outstanding.

To improve training speed, we decided to develop a new training framework, ZCls2, which is implemented on top of apex and provides friendlier and more powerful functions. Initial measurements show that ZCls2 improves training speed by at least 50% compared with ZCls. More functions are being added.

Installation

See Install.

Usage

See Get started.

Maintainers

zhujian - Initial work - zjykzj

Thanks

NVIDIA/apex
ZJCV/ZCls

Contributing

Anyone's participation is welcome! Open an issue or submit PRs.

Small notes:

Git submissions should comply with Conventional Commits.
If versioned, please conform to the Semantic Versioning 2.0.0 specification.
If editing the README, please conform to the standard-readme specification.

License

Apache License 2.0 © 2022 zjykzj |
zCluster | zCluster is a package for measuring galaxy cluster photometric redshifts using
data from large public surveys. It can also produce photometric redshift estimates
and galaxy density maps for any point in the sky using the included zField tool.

Documentation: https://zcluster.readthedocs.io
License: GPL v3
Authors: Matt Hilton, with contributions from Kabelo Kesebonye, Phumlani Phakathi,
Denisha Pillay, and Damien Ragavan (not all reflected on GitHub).
Installation: pip install zCluster
Support: Please use the GitHub issues page, and/or contact Matt Hilton.

zCluster has built-in support for querying large photometric surveys - currently:

SDSS (DR7 - DR12)
SDSS Stripe 82 (from SDSS DR7)
CFHTLenS
PS1 (DR2)
DECaLS (DR8 - DR10)
DES (DR1, DR2 and Y3 internal)
KiDS (DR4)

For details of the algorithm, its performance, and the output of the code, refer to
Hilton et al. (2018), which presents results based on SDSS, S82, and CFHTLenS, and/or
Hilton et al. (2021), which presents results based on DECaLS DR8. The other surveys
listed above are work in progress (so use with caution; PS1 in particular is
problematic). Pillay et al. (2021) presents the first use of the package for
producing projected galaxy density maps.

If you find zCluster useful in your work, please cite whichever one of the above
papers you think is appropriate (together, of course, with the appropriate papers
for the optical/IR survey used).

zCluster can also run on user-supplied .fits table photometric catalogs, provided
that they have columns named ID, RADeg, decDeg, and magnitude column names in the
form u_MAG_AUTO, u_MAGERR_AUTO, etc.

zCluster is under active development, and not all documentation is up to date. The
package also contains some experimental features that are not necessarily well tested. |
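Since column naming is the main requirement for custom catalogs, a quick pre-flight check can catch mistakes before a long run. The following is a standalone illustration only - the function name is invented and is not part of zCluster's API:

```python
REQUIRED = ("ID", "RADeg", "decDeg")

def check_catalog_columns(colnames):
    """Validate a user-supplied photometric catalog's column names.

    Returns the list of photometric bands found (e.g. ['u', 'g']),
    inferred from <band>_MAG_AUTO / <band>_MAGERR_AUTO column pairs.
    """
    missing = [c for c in REQUIRED if c not in colnames]
    if missing:
        raise ValueError("catalog is missing required columns: %s" % missing)
    bands = [c[:-len("_MAG_AUTO")] for c in colnames if c.endswith("_MAG_AUTO")]
    for band in bands:
        if band + "_MAGERR_AUTO" not in colnames:
            raise ValueError("no error column for band %r" % band)
    return bands
```

With a real table, the column names would come from something like astropy.table.Table.read(path).colnames.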
zcm | ZCM is a lightweight component model using ZeroMQ (http://zeromq.org/).

Components are the building blocks of an application.
Components are characterized by ports and timers.
Timers are bound to an operation and fire periodically.
There are four basic types of ports in ZCM: publisher, subscriber, client and server.
Publishers publish messages and Subscribers receive messages.
Clients request the services of a server by sending a request message; Servers receive such requests, process the requests, and respond back to the Client. Until the Server responds, the Client port blocks.
A Component can be instantiated multiple times in an application with different port configurations.
A component has a single operation queue that handles timer triggers and receives messages.
A component has an executor thread that processes this operation queue.
Components register functionality, e.g. timer_operations, subscribers_operations, etc.
Component instances are grouped together into a process, called an Actor.
An actor receives a configuration (.JSON) file that contains information regarding the components to instantiate.
This configuration file also contains properties of all timers and ports contained by the component instances. |
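The single operation queue and executor thread described above can be sketched in plain Python. This toy model is illustrative only - real ZCM components sit on ZeroMQ ports, which are omitted here:

```python
import queue
import threading

class Component:
    """Toy model of a ZCM component: every timer trigger and received
    message is enqueued as a callable and run by one executor thread,
    so operations never run concurrently within a component."""

    def __init__(self):
        self._ops = queue.Queue()            # the single operation queue
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        while True:
            op = self._ops.get()             # executor processes ops in FIFO order
            if op is None:                   # sentinel used to shut down
                break
            op()

    def enqueue(self, op):
        """Called by timers and subscriber ports alike."""
        self._ops.put(op)

    def stop(self):
        self._ops.put(None)
        self._thread.join()

# Usage: operations from different sources are serialized on one thread.
results = []
c = Component()
c.enqueue(lambda: results.append("timer fired"))
c.enqueue(lambda: results.append("message received"))
c.stop()
print(results)  # ['timer fired', 'message received']
```

Because one thread drains the queue, component code needs no locking around its own state - the design choice the bullet points above describe.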
zc.mappingobject | Sometimes, you want to use a mapping object like a regular
object. zc.mappingobject provides a wrapper for mapping objects that
provides both attribute and item access.

>>> import zc.mappingobject
>>> mapping = dict(a=1)
>>> ob = zc.mappingobject.mappingobject(mapping)

>>> ob.a
1
>>> ob.a = 2
>>> ob.a
2
>>> mapping
{'a': 2}

>>> list(ob)
['a']

>>> len(ob)
1

>>> ob['a'] = 3
>>> ob.a
3
>>> mapping
{'a': 3}

>>> del ob.a
>>> mapping
{}
>>> ob.a
Traceback (most recent call last):
...
AttributeError: a

>>> ob.b = 1
>>> mapping
{'b': 1}

>>> del ob['b']
>>> mapping
{} |
zcmd | No description available on PyPI. |
zcmds | Cross platform(ish) productivity commands written in python. Tools for doing media manipulation through ffmpeg and AI. On Windows, ls, rm, and other common unix file commands are installed. Whenever there is something that doesn't work on Windows but does on Mac/Linux, I will add a tool here to make it work. This toolset is ever-evolving, and it's going to get insane in 2024 with all the AI that I'm now integrating.

Install

> pip install zcmds
> zcmds      # shows all commands
> diskaudit  # audits the disk usage from the current directory.

Commands

archive - Zips up the specified directory or file.
askai - Asks a question to OpenAI from the terminal. Requires an OpenAI token, which will be requested and saved on first use. Prefix your query with ! to run a command directly.
aicode - A front end for Aider, an AI pair programming tool. This is the future the sci-fi writers promised you.
audnorm - Normalizes audio in a media file to a standard volume.
comports - Shows all the ports that are in use on the current computer (useful for Arduino debugging).
diskaudit - Walks the directory tree from the current directory and catalogs which of the child folders take up the most space.
git-bash (win32) - Launches a git-bash terminal (windows only).
gitsummary - Generates a summary of the git repository commits, useful for invoicing.
findfiles - Finds a file with the given glob.
img2webp - Conversion tool for converting images into webp format.
img2vid - Converts a series of images to a video.
obs_organize - Organizes the files in your default obs directory.
merge-to - Merges a clean git repo (no untracked files) to the target branch, pushes that target branch, then switches back to the original branch.
new - Opens a new terminal command window from the current terminal command window.
printenv - Prints the current environment variables, including path. Everything is sorted.
pdf2png - Converts a pdf to a series of images.
pdf2txt - Converts a pdf to a text file.
push - A safer way to git push; checks if the rebase is dirty.
removebackground - Launches an AI tool in the browser to remove the background from an image. Front end for the rembg backend.
search_and_replace - Searches all the files from the current directory and applies exact text search and replace.
search_in_files - Searches all files from the current working directory for exact string matches.
sharedir - Takes the current folder and shares it via a reverse proxy using ngrok.
stereo2mono - Reduces stereo audio / video to a single mono track.
sudo (win32 only) - Runs a command as in sudo, using the gsudo tool.
vidcat - Concatenates two videos together, upscaling a lower-resolution video.
vidmute - Strips out the audio in a video file and saves it as a new file.
vidinfo - Uses ffprobe to find the information from a video file.
vid2gif - Converts a video into an animated gif.
vid2jpg - Converts a video to a series of jpegs.
vid2mp3 - Converts a video to an mp3.
vid2mp4 - Converts a video to mp4. Useful for obs, which saves everything as mkv. Extremely fast with mkv -> mp4 conversion.
vidclip - Clips a video using timestamps.
viddur - Gets the duration; use vidinfo instead.
vidshrink - Shrinks a video. Useful for social media posts.
vidspeed - Changes the speed of a video.
vidvol - Changes the volume of a video.
ytclip - Downloads and clips a video from a url from youtube, rumble, bitchute, twitter... The timestamps are prompted by this program.
trash - Sends the folder or files to the trash. This sometimes works better than deleting files on Windows.
whichall - Finds all the executables in the path.
unzip - Unzips the provided file.
fixinternet - Attempts to fix the internet connection by flushing the dns and resetting the network adapter.
fixvmmem (win32 only) - Fixes vmmem consuming 100% cpu on windows 10 after hibernate.
transcribe-anything - Transcribes media content using state-of-the-art insanely-fast-whisper.
tx - Easily sends files over the internet, e.g. tx README.md. Front end to wormhole send, but gives you the code upfront so the client can auto connect.

Install (dev):

git clone https://github.com/zackees/zcmds
cd zcmds
python -m pip install -e .

Test by typing in zcmds.

Additional install

For pdf2image use:

win32: choco install poppler... ?

Note: Running tox will install hooks into the .tox directory. Keep this in mind if you are developing.
TODO: Add a cleanup function to undo this.Release Notes1.4.64: Addsnewto open a new terminal command window from the current terminal command window.1.4.63: Addsgitconfigureto give sane defaults to your git.1.4.62:askaican now run commands by prefixing with!1.4.61: Fix bug intx1.4.60: New tooltx, a wrapper aroundwormhole sendbut easier to use.1.4.59: New toolpush, a safe way togit push1.4.58: Fixesaskaiwith positional args (asking a question and then immediatly exiting.)1.4.57: Bring in newzcmds_win32includesshpass1.4.56: Fixesaicodeon first run crash.1.4.54: Bring in newzcmds_win32fixes and improvements.1.4.53: Fixestranscribe-anythingwith python 3.11 for--device insane1.4.52: Updatetranscribe-anythingfor bug fix 2.7.231.4.51: Updatestranscribe-anythingto 2.7.221.4.50: Uses git-bash version of ssh for windows.1.4.49: Addstrashwhich sends files to the trash.1.4.48: Addsremovebackgroundwhich uses AI to remove a background image. Usesrembgbackend1.4.47: Addstranscribe-anythingto the command stack.1.4.46: Fixmerge-towith missing push step from target step.1.4.45: Adds new toolmerge-to, which streamlines merge a current branch into the other and then pushing.1.4.44: Fixes vidwebmaster (Qt6 pinned version just stopped working!!)1.4.43: Addsaicodewhich is the same asaskai --code1.4.42: Addsimgshrink1.4.41:aidernow installed withpipxto avoid package conflicts because of it's pinned deps.1.4.40: Fixaskaiin python 3.11 with linux.1.4.39:aideris now part of this command set. An awesome ai pair programmer. Enable it withaskai --code1.4.37:askainow streams output to the console.1.4.36:losslesscut(on windows) can now be executed on other drivers and doesn't block the current terminal.1.4.35:askainow assumed--fast. You can use gpt4 vs--slow1.4.34: Fixes geninvoice1.4.32: OpenAI now requires version 1.3.8 or higher (fixes breaking changes from OpenAI)1.4.31: Improveaudnormso that it uses sox instead offfmpeg-normalize. Fix bug where not all commands were installed. 
Fixes openai api changes.1.4.30: Fix error in diskaudit when no files found in protected dir.1.4.29: Fix img2webp.1.4.28: Bug fix1.4.27: askai now has--fast1.4.26: vid2jpg now has--no-open-folder1.4.24: Addsarchive1.4.23: Bump zcmds-win321.4.21:askaihandles pasting text that has double lines in it.1.4.20:askaiis now at gpt-41.4.19: Addslosslesscutfor win32.1.4.18: Fix win32zcmds_win321.4.17:vid2mp4now adds--nvencand--height--crf1.4.16: Fixesimg2webp.1.4.15: Addsimg2webputility.1.4.13: Add--no-fast-startto vidwebmaster.1.4.12: Fixes a bug in find files when an exception is thrown during file inspection.1.4.11:findfilesnow has --start --end --larger-than --smaller-then1.4.10:zcmdsnow usesget_cmds.pyto get all of the commands from the exe list.1.4.8:audnormnow encodes in mp3 format (improves compatibility). vid2mp3 now allows--normalize1.4.7: Fixes broken build.1.4.6: Addssaycommand to speak out the text you give the program1.4.5: Adds saved settings for gitsummary1.4.4: Addspdf2txtcommand1.4.3: Addsgitsummarycommand1.4.2: Bump up zcmds_win32 to 1.0.171.4.1: Adds 'whichall' command1.4.0: Askai now supports question-answer-question-... 
interactive mode1.3.17: Adds syntax highlighting to open askai tool1.3.16: Improves openai by using gpt 3.51.3.15: Improve vidinfo for more data and be a lot faster with single pass probing.1.3.14: Improve vidinfo to handle non existant streams and bad files.1.3.13: Addedimg2vidcommand.1.3.12: Addedfixinternetcommand.1.3.11: Fix badges.1.3.10: Suppress spurious warnings with chardet in openai1.3.9: Changes sound driver, should eliminate the runtime dependency on win32.1.3.8: Adds askai tool1.3.7: findfile -> findfiles1.3.6: zcmds[win32] is now at 1.0.2 (includesunzip)1.3.5: zcmds[win32] is now at 1.0.1 (includesnanoandpico)1.3.4: Addsprintenvutility1.3.3: Addsfindfileutility.1.3.2: Addscomportsto display all comports that are active on the computer.1.3.1: Nit improvement in search_and_replace to improve ui1.3.0: vidwebmaster now does variable rate encoding. --crf and --heights has been replaced by --encodings1.2.1: Adds improvements to vidhero for audio fade and makes vidclip improves usability1.2.0: stripaudio -> vidmute1.1.30: Improves vidinfo with less spam on the console and allows passing height list1.1.29: More improvements to vidinfo1.1.28: vidinfo now has more encoding information1.1.27: Fix issues with spaces in vidinfo1.1.26: Adds vidinfo1.1.26: Vidclip now supports start_time end_time being omitted.1.1.25: Even better performance of diskaudit. 
50% reduction in execution time.1.1.24: Fixes diskaudit from double counting1.1.23: Fixes test_net_connection1.1.22: vid2mp4 - if file exists, try another name.1.1.21: Adds --fps option to vidshrink utility1.1.19: Using pyprojec.toml build system now.1.1.17: vidwebmaster fixes heights argument for other code path1.1.16: vidwebmaster fixes heights argument1.1.15: vidwebmaster fixed1.1.14: QT5 -> QT61.1.13: vidwebmaster fixes () bash-bug in linux1.1.12: vidwebmaster now has a gui if no file is supplied1.1.11: Adds vidlist1.1.10: Adds vidhero1.1.9: adds vidwebmaster1.1.8: adds vidmatrix to test out different settings.1.1.7: vidshrink and vidclip now both feature width argument1.1.6: Adds touch to win321.1.5: Adds unzip to win321.1.4: Fix home cmd.1.1.3: Fix up cmds so it returns int1.1.2: Fix git-bash on win321.1.1: ReleaseTODO:Add silence remover:https://github.com/bambax/RemsiAdd lossless cut to vidcliphttps://github.com/mifi/lossless-cut |
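Several of the tools above are thin wrappers over simple ideas; for instance, the core of a search_and_replace-style tool can be sketched in a few lines of stdlib Python (this is an illustration, not zcmds' actual implementation):

```python
import os

def search_and_replace(root, needle, replacement):
    """Walk `root` and apply an exact text replacement in every file.

    Returns the number of files changed.  Binary or unreadable files
    are skipped by catching decode/OS errors.
    """
    changed = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8") as f:
                    text = f.read()
            except (UnicodeDecodeError, OSError):
                continue  # skip binary/unreadable files
            if needle in text:
                with open(path, "w", encoding="utf-8") as f:
                    f.write(text.replace(needle, replacement))
                changed += 1
    return changed
```

A production tool would add a dry-run mode and confirmation prompt before rewriting files in place.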
zcmds-win32 | Optional zcmds package for win32 to make it feel more like a linux distribution. This is a great package to use if you want to
use things liketee,grepand unix commands and have it work on windows.Commandscatcpdugit-bashgrephomefalseidlsmd5summvnanopicopsopenrmtruetestteetouchunzipwhichwcxargsuniqunamefixvmmemIf CPU consumption for vmmem high, run this command to fix it.yesInstall (normal)python -m pip install zcmdsInstall (dev):git clone https://github.com/zackees/zcmds_win32cd zcmds_win32python -pip install -e .Release Notes1.2.19:python3andpip3now map topythonandpip1.2.18: Betterbash, which now paths the default git-bash `/usr/bin``1.2.17: Addeddate1.2.16:opennow accepts double back slashes.1.2.15: Addsrealpathandunameto git-bash utils.1.2.14: Fixesbashfrom missing file.1.2.13: Addsbashfrom git-bash anddirname.1.2.12:sshpassand nowzcmds_win32 --installto force re-download of unix tools.1.2.10:opennow handles git paths1.2.9: Fixssh-keygentrampoline and other new ssh commands.1.2.8: Improvedopencommand to now open extensionless text files1.2.7: Adds ssh related tools for windows from git-bash.1.2.4: Fixesopenwhen passing in a '.' directory.1.2.3:homenow works in non C: drive1.2.2: Addsmaketool for building code.1.2.1: Adds tooldig1.0.26: When sublime is opened viaopenit now opens in it's own window.1.0.25: Fixopenfor python 3.91.0.24: Addsed1.0.23: Yank 1.0.21/221.0.20: Addsuniqanduname1.0.19: Change default text editor to sublime over textpad1.0.18: Addstrueandfalseandtimeout1.0.17: Minor fixes.1.0.16: Addsxargs,ps,id,wc,md5sum,tee1.0.15: fixed 'no' command, which doesn't exist.1.0.13: Addsyes1.0.12: open tries to find a text editor.1.0.11: Addssudo_win32[sudo]1.0.10: Fixesfixvmmemwhich now uses elevated_exec1.0.9: Fixesopenwhen using forward slashes1.0.8: Fixesopenwhen usingopen .1.0.7: Fixes missingfixvmmem1.0.5:opennow assumes current directory if no path is given1.0.4:fixvmmemnow runs in elevated privledges1.0.3: Addsfixvmmem1.0.2: Addsunzip1.0.1: Addspico/nano1.0.0: Moved zcmds_win32 from zcmds |
zc.metarecipe | Buildout recipes provide reusable Python modules for common
configuration tasks. The most widely used recipes tend to provide
low-level functions, like installing eggs or software distributions,
creating configuration files, and so on. The normal recipe framework
is fairly well suited to building these general components.

Full-blown applications may require many, often tens, of parts.
Defining the many parts that make up an application can be tedious and
often entails a lot of repetition. Buildout provides a number of
mechanisms to avoid repetition, including merging of configuration
files and macros, but these, while useful to an extent, don’t scale
very well. Buildout isn’t and shouldn’t be a programming language.Meta-recipes allow us to bring Python to bear to provide higher-level
abstractions for buildouts.

A meta-recipe is a regular Python recipe that primarily operates by
creating parts. A meta recipe isn’t merely a high level recipe. It’s
a recipe that defers most of it’s work to lower-level recipe by
manipulating the buildout database.

Unfortunately, buildout doesn't yet provide a high-level API for
creating parts. It has a private low-level API which has been
promoted to public (meaning it won't be broken by future releases), and
it’s straightforward to write the needed high-level API, but it’s
annoying to repeat the high-level API in each meta recipe.

This small package provides the high-level API needed for meta recipes
and a simple testing framework. It will be merged into a future
buildout release.

A presentation at PyCon 2011 described early work with meta recipes.

Contents

A simple meta-recipe example
Testing
Changes

A simple meta-recipe example

Let's look at a fairly simple meta-recipe example. First, consider a
buildout configuration that builds a database deployment:

[buildout]
parts = ctl pack
[deployment]
recipe = zc.recipe.deployment
name = ample
user = zope
[ctl]
recipe = zc.recipe.rhrc
deployment = deployment
chkconfig = 345 99 10
parts = main
[main]
recipe = zc.zodbrecipes:server
deployment = deployment
address = 8100
path = /var/databases/ample/main.fs
zeo.conf =
<zeo>
address ${:address}
</zeo>
%import zc.zlibstorage
<zlibstorage>
<filestorage>
path ${:path}
</filestorage>
</zlibstorage>
[pack]
recipe = zc.recipe.deployment:crontab
deployment = deployment
times = 1 2 * * 6
command = ${buildout:bin-directory}/zeopack -d3 -t00 ${main:address}

This buildout doesn't build software. Rather, it builds configuration
for deploying a database configuration using already-deployed
software. For the purpose of this document, however, the details are
totally unimportant.

Rather than crafting the configuration above every time, we can write
a meta-recipe that crafts it for us. We’ll use our meta-recipe as
follows:

[buildout]
parts = ample
[ample]
recipe = com.example.ample:db
path = /var/databases/ample/main.fs

The idea here is that the meta recipe allows us to specify the minimal
information necessary. A meta-recipe often automates policies and
assumptions that are application and organization dependent. The
example above assumes, for example, that we want to pack to 3
days in the past on Saturdays.

So now, let's see the meta recipe that automates this:

import zc.metarecipe

class Recipe(zc.metarecipe.Recipe):

    def __init__(self, buildout, name, options):
        super(Recipe, self).__init__(buildout, name, options)

        self.parse('''
            [deployment]
            recipe = zc.recipe.deployment
            name = %s
            user = zope
            ''' % name)

        self['main'] = dict(
            recipe = 'zc.zodbrecipes:server',
            deployment = 'deployment',
            address = 8100,
            path = options['path'],
            **{
                'zeo.conf': '''
                    <zeo>
                        address ${:address}
                    </zeo>

                    %import zc.zlibstorage

                    <zlibstorage>
                        <filestorage>
                            path ${:path}
                        </filestorage>
                    </zlibstorage>
                    '''}
            )

        self.parse('''
            [pack]
            recipe = zc.recipe.deployment:crontab
            deployment = deployment
            times = 1 2 * * 6
            command =
                ${buildout:bin-directory}/zeopack -d3 -t00 ${main:address}

            [ctl]
            recipe = zc.recipe.rhrc
            deployment = deployment
            chkconfig = 345 99 10
            parts = main
            ''')

The meta recipe just adds parts to the buildout. It does this by
calling the inherited __setitem__ and parse methods. The parse
method just takes a string in ConfigParser syntax. It's useful
when we want to add static, or nearly static, part data. The setitem
syntax is useful when we have non-trivial computation for part data.

The order that we add parts is important. When adding a part, any
string substitutions and other dependencies are evaluated, so the
referenced parts must be defined first. This is why, for example, the
pack part is added after the main part.

Note that the meta recipe supplied an integer for one of the
options. In addition to strings, it’s legal to supply integer and
unicode values.

Testing

Now, let's test it. We'll test it without actually running
buildout. Rather, we’ll use a faux buildout provided by the
zc.metarecipe.testing module.

>>> import zc.metarecipe.testing
>>> buildout = zc.metarecipe.testing.Buildout()

>>> _ = Recipe(buildout, 'ample', dict(path='/var/databases/ample/main.fs'))
[deployment]
name = ample
recipe = zc.recipe.deployment
user = zope
[main]
address = 8100
deployment = deployment
path = /var/databases/ample/main.fs
recipe = zc.zodbrecipes:server
zeo.conf = <zeo>
address ${:address}
</zeo>
<BLANKLINE>
%import zc.zlibstorage
<BLANKLINE>
<zlibstorage>
<filestorage>
path ${:path}
</filestorage>
</zlibstorage>
[ctl]
chkconfig = 345 99 10
deployment = deployment
parts = main
recipe = zc.recipe.rhrc
[pack]
command = ${buildout:bin-directory}/zeopack -d3 -t00 ${main:address}
deployment = deployment
recipe = zc.recipe.deployment:crontab
times = 1 2 * * 6

When we call our recipe, it will add sections to the test buildout and
these are simply printed as added, so we can verify that the correct
data was generated.That’s pretty much it.Changes0.2.1 (2014-01-24)Fixed: When parsing configuration text, sections were input andevaluated at the same time in section sorted order. This
caused problems if a section that sorted early referred to a
section that sorted late.0.2.0 (2012-09-24)When setting option values, unicode and int values will be converted
to strings. Other non-string values are rejected. Previously, it
was easy to get errors from buildout when setting options with
values read from ZooKeeper trees, which are unicode due to the use
of JSON.

Fixed: When using the meta-recipe parse method, the order that
resulting sections were added was non-deterministic, due to the
way ConfigParser works. Now sections are added to a buildout
in sorted order, by section name.

0.1.0 (2012-05-31)

Initial release |
zc.monitor | Monitor Server

The monitor server provides a command-line interface to
request various bits of information. The server is zc.ngi based, so we can use
the zc.ngi testing infrastructure to demonstrate it.

>>> import zc.ngi.testing
>>> import zc.monitor

>>> connection = zc.ngi.testing.TextConnection()
>>> server = zc.monitor.Server(connection)

The server supports an extensible set of commands. It looks up
commands as named zc.monitor.interfaces.IMonitorPlugin “utilities”, as defined
by the zope.component package.

To see this, we'll create a hello plugin:

>>> def hello(connection, name='world'):
... """Say hello
...
... Provide a name if you're not the world.
... """
... connection.write("Hi %s, nice to meet ya!\n" % name)and register it:>>> zc.monitor.register(hello)When we register a command, we can provide a name. To see this, we’ll
register hello again:

>>> zc.monitor.register(hello, 'hi')

Now we can give the hello command to the server:

>>> connection.test_input('hi\n')
Hi world, nice to meet ya!
-> CLOSE

We can pass a name:

>>> connection = zc.ngi.testing.TextConnection()
>>> server = zc.monitor.Server(connection)
>>> connection.test_input('hello Jim\n')
Hi Jim, nice to meet ya!
-> CLOSE

The server comes with a few basic commands. Let's register
them so we can see what they do. We'll use the simplified registration
interface:

>>> zc.monitor.register_basics()

The first is the help command. Giving help without input gives a
list of available commands:

>>> connection = zc.ngi.testing.TextConnection()
>>> server = zc.monitor.Server(connection)
>>> connection.test_input('help\n')
Supported commands:
hello -- Say hello
help -- Get help about server commands
hi -- Say hello
interactive -- Turn on monitor's interactive mode
quit -- Quit the monitor
-> CLOSE

We can get detailed help by specifying a command name:

>>> connection = zc.ngi.testing.TextConnection()
>>> server = zc.monitor.Server(connection)
>>> connection.test_input('help help\n')
Help for help:
<BLANKLINE>
Get help about server commands
<BLANKLINE>
By default, a list of commands and summaries is printed. Provide
a command name to get detailed documentation for a command.
<BLANKLINE>
-> CLOSE

>>> connection = zc.ngi.testing.TextConnection()
>>> server = zc.monitor.Server(connection)
>>> connection.test_input('help hello\n')
Help for hello:
<BLANKLINE>
Say hello
<BLANKLINE>
Provide a name if you're not the world.
<BLANKLINE>
-> CLOSE

The interactive command switches the monitor into interactive mode. As
seen above, the monitor usually responds to a single command and then closes
the connection. In "interactive mode", the connection is not closed until
the quit command is used. This can be useful when accessing the monitor
via telnet for diagnostics.

>>> connection = zc.ngi.testing.TextConnection()
>>> server = zc.monitor.Server(connection)
>>> connection.test_input('interactive\n')
Interactive mode on. Use "quit" To exit.
>>> connection.test_input('help interactive\n')
Help for interactive:
<BLANKLINE>
Turn on monitor's interactive mode
<BLANKLINE>
Normally, the monitor releases the connection after a single command.
By entering the interactive mode, the monitor will not end the connection
until you enter the "quit" command.
<BLANKLINE>
In interactive mode, an empty line repeats the last command.
<BLANKLINE>
>>> connection.test_input('help quit\n')
Help for quit:
<BLANKLINE>
Quit the monitor
<BLANKLINE>
This is only really useful in interactive mode (see the "interactive"
command).
<BLANKLINE>

Notice that the result of the commands did not end with "-> CLOSE", which would
have indicated a closed connection.

Also notice that the interactive mode allows you to repeat commands.

>>> connection.test_input('hello\n')
Hi world, nice to meet ya!
>>> connection.test_input('\n')
Hi world, nice to meet ya!
>>> connection.test_input('hello Jim\n')
Hi Jim, nice to meet ya!
>>> connection.test_input('\n')
Hi Jim, nice to meet ya!

Now we will use quit to close the connection.

>>> connection.test_input('quit\n')
Goodbye.
-> CLOSE

Finally, it's worth noting that exceptions will generate a
traceback on the connection.

>>> connection = zc.ngi.testing.TextConnection()
>>> server = zc.monitor.Server(connection)
>>> connection.test_input('hello Jim 42\n') # doctest: +ELLIPSIS
Traceback (most recent call last):
...
TypeError: hello() takes at most 2 arguments (3 given)
<BLANKLINE>
-> CLOSE

Command loops

Using the "MORE" mode, commands can signal that they want to claim all future
user input. We'll implement a silly example to demonstrate how it works.

Here's a command that implements a calculator.

>>> PROMPT = '.'
>>> def calc(connection, *args):
...     if args and args[0] == 'quit':
...         return zc.monitor.QUIT_MARKER
...
...     if args:
...         connection.write(str(eval(''.join(args))))
...         connection.write('\n')
...
...     connection.write(PROMPT)
...     return zc.monitor.MORE_MARKER

If we register this command…

>>> zc.monitor.register(calc)

…we can invoke it and we get a prompt.

>>> connection = zc.ngi.testing.TextConnection()
>>> server = zc.monitor.Server(connection)
>>> connection.test_input('calc\n')
.

If we then give it more input we get the result plus another prompt.

>>> connection.test_input('2+2\n')
4
.

>>> connection.test_input('4*2\n')
8
.Once we’re done we can tell the calculator to let us go.>>> connection.test_input('quit\n')
-> CLOSEStart server>>> import time
>>> import zope.testing.loggingsupport, logging
>>> loghandler = zope.testing.loggingsupport.InstalledHandler(
... None, level=logging.INFO)

>>> zc.monitor.start(9644)
('', 9644)

>>> print loghandler
zc.ngi.async.server INFO
listening on ('', 9644)

>>> zc.monitor.last_listener.close()
>>> zc.monitor.last_listener = None
>>> time.sleep(0.1)

>>> loghandler.clear()

>>> zc.monitor.start(('127.0.0.1', 9644))
('127.0.0.1', 9644)

>>> print loghandler
zc.ngi.async.server INFO
listening on ('127.0.0.1', 9644)

>>> zc.monitor.last_listener.close()
>>> zc.monitor.last_listener = None
>>> time.sleep(0.1)

Bind to port 0:

>>> addr = zc.monitor.start(0)
>>> addr == zc.monitor.last_listener.address
True

>>> zc.monitor.last_listener.close()
>>> zc.monitor.last_listener = None
>>> time.sleep(0.1)

Trying to rebind to a port in use:

>>> loghandler.clear()

>>> zc.monitor.start(('127.0.0.1', 9644))
('127.0.0.1', 9644)

>>> zc.monitor.start(('127.0.0.1', 9644))
False

>>> print loghandler
zc.ngi.async.server INFO
listening on ('127.0.0.1', 9644)
zc.ngi.async.server WARNING
unable to listen on ('127.0.0.1', 9644)
root WARNING
unable to start zc.monitor server because the address ('127.0.0.1', 9644) is in use.

>>> zc.monitor.last_listener.close()
>>> zc.monitor.last_listener = None
>>> time.sleep(0.1)>>> loghandler.uninstall()Change History0.4.0.post1 (2019-12-06)Fix change log on PyPI.0.4.0 (2019-12-06)Use new Python 2.6/3.x compatible exception syntax. (This does not mean that
this package is already Python 3 compatible.)0.3.1 (2012-04-27)When binding the monitor to a Unix-domain socket, remove an existing
socket at the same path so the bind is successful. This may affect
existing usage with respect to zopectl debug behavior, but will be
more predictable.0.3.0 (2011-12-12)Added a simplified registration interface.0.2.1 (2011-12-10)Added an address option to start to be able to specify an adapter
to bind to.start now returns the address being listened on, which is useful
when binding to port 0.Using Python's doctest module instead of deprecated zope.testing.doctest.0.2.0 (2009-10-28)Add the "MORE" mode so commands can co-opt user interaction0.1.2 (2008-09-15)Bugfix: The z3monitor server lacked a handle_close method, which
caused errors to get logged when users closed connections before
giving commands.0.1.1 (2008-09-14)Bugfix: fixed and added test for regression in displaying tracebacks.0.1.0 (2008-09-14)Initial release |
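The marker protocol shown in the calculator example at the top of this section can be sketched in plain Python. This is only an illustration of the idea: `MORE_MARKER` and `QUIT_MARKER` mirror zc.monitor's markers, and `FakeConnection` is a made-up stand-in for a real monitor connection.

```python
# Sketch of the zc.monitor marker protocol, assuming only what the
# calculator example above shows.  MORE_MARKER keeps the connection in
# the command's own mini-shell; QUIT_MARKER ends the session.
MORE_MARKER = object()
QUIT_MARKER = object()


class FakeConnection:
    """Stand-in for a monitor connection: collects written output."""

    def __init__(self):
        self.written = []

    def write(self, data):
        self.written.append(data)


def calc(connection, line):
    """A toy calculator command in the style of the example above."""
    if line.strip() == 'quit':
        return QUIT_MARKER
    connection.write('%s\n.\n' % eval(line))  # toy: arithmetic only
    return MORE_MARKER
```

As long as the command keeps returning `MORE_MARKER`, the server keeps routing the user's input lines back to the same command instead of to the top-level command dispatcher.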
zc.monitorcache | zc.monitorcache is a zc.z3monitor plugin that allows one to modify or check
the cache size (in objects or bytes) of a running instance.>>> import zc.monitorcache
>>> import zope.component
>>> import zc.ngi.testing
>>> import zc.monitor
>>> import zc.monitor.interfaces
>>> import zc.z3monitor
>>> import zc.z3monitor.interfaces>>> connection = zc.ngi.testing.TextConnection()
>>> server = zc.monitor.Server(connection)>>> zope.component.provideUtility(zc.monitorcache.cacheMonitor,
... zc.z3monitor.interfaces.IZ3MonitorPlugin, 'cache_size')>>> connection.test_input('cache_size\n')
-> CLOSEWe have no databases right now. Let’s add a few so that we can test.>>> import ZODB.tests.util
>>> import ZODB.interfaces
>>> main = ZODB.tests.util.DB()
>>> zope.component.provideUtility(main, ZODB.interfaces.IDatabase)
>>> test = ZODB.tests.util.DB()
>>> zope.component.provideUtility(
... test, ZODB.interfaces.IDatabase, 'test')Now we should get information on each of the database’s cache sizes>>> connection = zc.ngi.testing.TextConnection()
>>> server = zc.monitor.Server(connection)>>> connection.test_input('cache_size\n')
DB cache sizes for main
Max objects: 400
Max object size bytes: 0MB
DB cache sizes for test
Max objects: 400
Max object size bytes: 0MB
-> CLOSEWe can request information about a specific db as well>>> connection = zc.ngi.testing.TextConnection()
>>> server = zc.monitor.Server(connection)>>> connection.test_input('cache_size -\n')
DB cache sizes for main
Max objects: 400
Max object size bytes: 0MB
-> CLOSE>>> connection = zc.ngi.testing.TextConnection()
>>> server = zc.monitor.Server(connection)>>> connection.test_input('cache_size test\n')
DB cache sizes for test
Max objects: 400
Max object size bytes: 0MB
-> CLOSEWe can also modify cache sizes for a specific db>>> connection = zc.ngi.testing.TextConnection()
>>> server = zc.monitor.Server(connection)>>> connection.test_input('cache_size test 300\n')
Set max objects to 300
-> CLOSE>>> connection = zc.ngi.testing.TextConnection()
>>> server = zc.monitor.Server(connection)>>> connection.test_input('cache_size test 10MB\n')
Set max object size bytes to 10MB
-> CLOSE>>> connection = zc.ngi.testing.TextConnection()
>>> server = zc.monitor.Server(connection)>>> connection.test_input('cache_size test\n')
DB cache sizes for test
Max objects: 300
Max object size bytes: 10MB
-> CLOSE |
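The plugin shown above is just a callable that writes its report line by line. A minimal stand-alone sketch of that shape, with a hypothetical `FakeDB` in place of a real ZODB database and a plain `write` callable in place of the monitor connection, might look like:

```python
class FakeDB:
    """Illustrative stand-in for a ZODB database's cache-size accessors."""

    def __init__(self, max_objects, max_bytes):
        self.max_objects = max_objects
        self.max_bytes = max_bytes

    def getCacheSize(self):
        return self.max_objects

    def getCacheSizeBytes(self):
        return self.max_bytes


# '' is the conventional registration name of the main database
databases = {'': FakeDB(400, 0), 'test': FakeDB(300, 10 * 1024 * 1024)}


def cache_size(write, name=None):
    """Report cache sizes for all databases, or for a single named one.

    As in the doctest above, '-' selects the main (unnamed) database.
    """
    names = sorted(databases) if name is None else ['' if name == '-' else name]
    for n in names:
        db = databases[n]
        write('DB cache sizes for %s\n' % (n or 'main'))
        write('Max objects: %d\n' % db.getCacheSize())
        write('Max object size bytes: %dMB\n' % (db.getCacheSizeBytes() // (1024 * 1024)))
```

The real plugin additionally accepts a size argument and calls the database's setters; this sketch covers only the reporting half.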
zc.monitorlogstats | zc.monitorlogstats provides a zc.z3monitor plugin and log handler to
track log statistics. The idea is that you can connect to it to find
out how many log entries of various types have been posted. If you
sample it over time, you can see how many entries are added. In
particular, if you get new warning, error, or critical entries,
someone might want to look at the logs to find out what’s going on.Counting Log HandlerLet’s start by looking at the log handler. The factory
zc.monitorlogstats.CountingHandler can be installed like any other
handler. It doesn’t emit anything. It just counts.Let’s create one to see how it works:>>> import logging, zc.monitorlogstats
>>> handler = zc.monitorlogstats.CountingHandler()
>>> logging.getLogger().addHandler(handler)
>>> logging.getLogger().setLevel(logging.INFO)Now, let’s log:>>> for i in range(5):
... logging.getLogger('foo').critical('Yipes')>>> for i in range(9):
... logging.getLogger('bar').error('oops')>>> for i in range(12):
... logging.getLogger('baz').warn('hm')>>> for i in range(21):
... logging.getLogger('foo').info('yawn')>>> for i in range(99):
... logging.getLogger('xxx').log(5, 'yuck yuck')We can ask the handler for statistics:>>> handler.start_time
datetime.datetime(2008, 9, 5, 21, 10, 14)>>> for level, count, message in handler.statistics:
... print level, count
... print `message`
20 21
'yawn'
30 12
'hm'
40 9
'oops'
50 5
'Yipes'The statistics consist of the log level, the count of log messages,
and the formatted text of the last message.We can also ask it to clear its statistics:>>> handler.clear()
>>> for i in range(3):
... logging.getLogger('foo').critical('Eek')>>> handler.start_time
datetime.datetime(2008, 9, 5, 21, 10, 15)>>> for level, count, message in handler.statistics:
... print level, count
... print `message`
50 3
'Eek'There’s ZConfig support for defining counting handlers:>>> import ZConfig, StringIO
>>> schema = ZConfig.loadSchemaFile(StringIO.StringIO("""
... <schema>
... <import package="ZConfig.components.logger"/>
... <multisection type="logger" attribute="loggers" name="*" required="no">
... </multisection>
... </schema>
... """))>>> conf, _ = ZConfig.loadConfigFile(schema, StringIO.StringIO("""
... %import zc.monitorlogstats
... <logger>
... name test
... level INFO
... <counter>
... format %(name)s %(message)s
... </counter>
... </logger>
... """))>>> testhandler = conf.loggers[0]().handlers[0]>>> for i in range(2):
... logging.getLogger('test').critical('Waaa')
>>> for i in range(22):
... logging.getLogger('test.foo').info('Zzzzz')>>> for level, count, message in handler.statistics:
... print level, count
... print `message`
20 22
'Zzzzz'
50 5
'Waaa'>>> for level, count, message in testhandler.statistics:
... print level, count
... print `message`
20 22
'test.foo Zzzzz'
50 2
'test Waaa'Note that the message output from the test handler reflects the format
we used when we set it up.The example above illustrates that you can install as many counting
handlers as you want to.Monitor PluginThe zc.monitorlogstats Monitor plugin can be used to query log statistics.>>> import sys
>>> plugin = zc.monitorlogstats.monitor(sys.stdout)
2008-09-05T21:10:15
20 22 'Zzzzz'
50 5 'Waaa'The output consists of the start time and a line for each log level for
which there are statistics. Each statistics line has the log level,
entry count, and a repr of the last log message.By default, the root logger will be used. You can specify a logger name:>>> plugin = zc.monitorlogstats.monitor(sys.stdout, 'test')
2008-09-05T21:10:16
20 22 'test.foo Zzzzz'
50 2 'test Waaa'You can use ‘.’ for the root logger:>>> plugin = zc.monitorlogstats.monitor(sys.stdout, '.')
2008-09-05T21:10:15
20 22 'Zzzzz'
50 5 'Waaa'Note that if there are multiple counting handlers for a logger, only
the first will be used. (So don’t define more than one. :)It is an error to name a logger without a counting handler:>>> plugin = zc.monitorlogstats.monitor(sys.stdout, 'test.foo')
Traceback (most recent call last):
...
ValueError: Invalid logger name: test.fooYou can specify a second argument with a value of 'clear', to clear
statistics:>>> plugin = zc.monitorlogstats.monitor(sys.stdout, 'test', 'clear')
2008-09-05T21:10:16
20 22 'test.foo Zzzzz'
50 2 'test Waaa'>>> plugin = zc.monitorlogstats.monitor(sys.stdout, 'test', 'clear')
2008-09-05T21:10:17Download |
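The counting handler described above can be approximated with nothing but the stdlib logging module. This is a sketch of the idea, not the real zc.monitorlogstats implementation:

```python
import logging
from datetime import datetime


class CountingHandler(logging.Handler):
    """Count records per level and remember the last formatted message."""

    def __init__(self, level=logging.NOTSET):
        logging.Handler.__init__(self, level)
        self.clear()

    def emit(self, record):
        # Never emits anything; just bump the per-level count and
        # remember the most recent formatted message.
        count, _ = self._counts.get(record.levelno, (0, None))
        self._counts[record.levelno] = (count + 1, self.format(record))

    @property
    def statistics(self):
        return sorted(
            (level, count, message)
            for level, (count, message) in self._counts.items())

    def clear(self):
        self.start_time = datetime.now()
        self._counts = {}
```

Attached to a logger with `addHandler`, it accumulates `(level, count, last_message)` triples exactly in the shape the doctest above prints.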
zc.monitorpdb | zc.monitorpdb is a small plugin for the (very) lightweight zc.monitor
system. It allows a user to telnet to a monitor port and invoke a
Python debugger (PDB) prompt.To use it, one must first register the command so zc.monitor is aware of
it.>>> import zc.monitorpdb
>>> import zope.component
>>> import zc.monitor.interfaces
>>> zope.component.provideUtility(zc.monitorpdb.command,
... zc.monitor.interfaces.IMonitorPlugin, 'pdb')Since zc.monitor is implemented with zc.ngi, we can use zc.ngi’s testing
helpers.>>> import zc.ngi.testing
>>> connection = zc.ngi.testing.TextConnection()
>>> server = zc.monitor.Server(connection)If we invoke the command, we’ll get the appropriate prompt.>>> connection.test_input('pdb\n')
(Pdb)Now we can do normal pdb things like list the code being executed.>>> connection.test_input('l\n')
34 global fakeout
35
36 fakeout = FakeStdout(connection.connection)
37 debugger = pdb.Pdb(stdin=None, stdout=fakeout)
38 debugger.reset()
39 -> debugger.setup(sys._getframe(), None)
40
41
42 def command(connection, *args):
43 global debugger
44 global fakeout
(Pdb)As well as go “up” in the function call stack.>>> connection.test_input('u\n')
> /graphted-storage/workspace/zc.monitorpdb/src/zc/monitorpdb/__init__.py(48)command()
-> reset(connection)
(Pdb)There is a “reset” command that gives us a fresh debugger (just in case
something bad happened to ours and we don't want to restart the host
process). Here we go from the current location being one thing (the
result of the previous “u” command) to another.>>> connection.test_input('l\n')
57 return zc.monitor.QUIT_MARKER
58 else:
59 debugger.onecmd(' '.join(args))
60
61 connection.write(debugger.prompt)
62 -> return zc.monitor.MORE_MARKER
[EOF]
(Pdb)
>>> connection.test_input('reset\n')
(Pdb)
>>> connection.test_input('l\n')
34 global fakeout
35
36 fakeout = FakeStdout(connection.connection)
37 debugger = pdb.Pdb(stdin=None, stdout=fakeout)
38 debugger.reset()
39 -> debugger.setup(sys._getframe(), None)
40
41
42 def command(connection, *args):
43 global debugger
44 global fakeout
(Pdb)Some features don’t work, however.>>> connection.test_input('debug 1+1\n')
the "debug" command is not supported
(Pdb)Once we’re done, we ask to be let go.>>> connection.test_input('quit\n')
-> CLOSE |
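The FakeStdout trick the listing above hints at, pointing a `pdb.Pdb` at something other than the real terminal, works with any file-like objects. In this sketch the debugger reads a scripted `c` (continue) command from one StringIO and writes its prompt to another:

```python
import io
import pdb


def target(x):
    return x + 1


# Drive pdb non-interactively: commands come from `script`, and
# everything pdb prints (including the "(Pdb) " prompt) goes to `out`.
script = io.StringIO('c\n')   # just "continue"
out = io.StringIO()
debugger = pdb.Pdb(stdin=script, stdout=out)
result = debugger.runcall(target, 41)
```

Passing `stdout` makes `Pdb` read commands from the given `stdin` instead of the interactive terminal, which is essentially what zc.monitorpdb does with the monitor connection.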
zcms | ==========================================
zcms: an ultra-light file-system based CMS
==========================================

zcms is a minimal file-system based CMS (Jekyll-like), built entirely from
things you already know:

- No database required; every page is a text file (rst/md)
- Extended reStructuredText directives (.rst) make dynamic content such as
  blogs, navigation, and news easy

Sample sites:

- http://viewer.everydo.com
- http://developer.everydo.com
- http://everydo.com
- http://edodocs.com

Run the bundled demo site (served on port 8000)::

    docker run -d -p 8000:80 panjunyong/zcms

Run your own site located in /home/panjy/sites::

    docker run -d -v /home/panjy/sites:/var/sites -p 8000:80 panjunyong/zcms

Debug a site theme (instant refresh, but slower)::

    docker run -d -v /home/panjy/sites:/var/sites -p 8000:80 panjunyong/zcms debug

For feedback, contact us on Weibo: http://weibo.com/panjunyong

Friction-free site building
============================
Sites live in the sites folder; each site consists of content (contents)
and themes (themes).

Setting section order and titles
--------------------------------
In each folder you may place a `_config.yaml` file that sets the folder's
properties::

    title: Tutorial                              # title
    order: [index.rst, tour, blog, about.rst]    # display order
    exclude: [img]                               # hide the img folder

For rst/md page files, the same information can be given directly in the
file header::

    ---
    title: Tutorial          # title
    creator: Pan Junyong     # creator
    created: 2010-12-12 9:12 # creation time; news is sorted by it
    ---

Page-file properties must begin and end with three dashes.

Setting the left/right columns and the header area
--------------------------------------------------
For all pages under a folder you can customize what is shown on the left,
right, and top by adding `_left.rst`, `_right.rst`, and `_upper.rst`
respectively.

If a specific page needs its own customization, it can be set separately,
distinguished by name:

1. header for the index.rst page: `_upper_index.rst`
2. left column for the about.rst page: `_left_about.rst`

Dynamic content
---------------
Just use the following directives in reST:

1. Recent news::

    .. news::
       :size: 5
       :path: blog

2. Blog page::

    .. blogs::
       :size: 20

3. Navigation tree::

    .. navtree::
       :root_depth: 2

Theme configuration
-------------------
The _config.yaml in the site root folder defines the theme for the whole
site::

    theme_base: http://localhost:6543/themes/bootstrap # base location of the themes; it may hold several
    theme: default.html # the default theme

A theme is specified by URL; the full theme address above is::

    http://localhost:6543/themes/bootstrap/default.html

If you do not want the default theme, set a custom one in the folder or
page properties::

    theme: home.html # home-page theme, possibly without left/right columns

which selects the theme::

    http://localhost:6543/themes/bootstrap/home.html

Making themes
-------------
Look at the files in the themes folder: a theme is simply a Python string
Template. A most basic theme could be::

    <html><head><title>$title - $site_title</title><meta name="Description" content="$site_description"/></head><body><ul>$nav</ul><div>$upper</div><table><tr><td>$left</td><td>$content</td><td>$right</td></tr></table></body></html>

The file may include the following variables:

- `site_title`: the site title
- `site_description`: the description of the current content
- `nav`: the site navigation
- `title`: the title of the current content
- `description`: the description of the current content
- `content`: the body of the current content
- `left`: the content shown in the left column
- `right`: the content shown in the right column
- `upper`: the content shown in the top area
- `theme_base`: the location of the themes

Virtual host setup
------------------
The _config.yaml in the site root folder defines the virtual-host setup
for the whole site::

    domain_name: domain.com, www.domain.com # domain names

This means the site can be reached directly via `domain_name`, and
`site_name` may be omitted from the URL path.

Refreshing the cache
====================
By default the system caches themes, and content such as "recently
updated" is refreshed once a day. Call the following addresses to refresh
immediately by hand:

1. refresh the theme: `http://server.com/clear_theme_cache`
2. refresh the content: `http://server.com/clear_content_cache`

Developing and debugging the code
=================================
Use a local checkout (/home/panjy/git/zcms)::

    docker run -t -i -v /home/panjy/git/zcms:/opt/zcms/ -p 8000:80 panjunyong/zcms shell
    bin/buildout
    bin/pserve development.ini

Jekyll references
=================
- http://www.ruanyifeng.com/blog/2012/08/blogging_with_jekyll.html
- http://yanping.me/cn/blog/2012/03/18/github-pages-step-by-step/
- http://www.soimort.org/posts/101/

TODO
====
1. Polish the default bootstrap-style theme
2. Simplify virtual-host configuration:

   - merge the nginx and zcms docker containers
   - move per-site deployment configuration into the site's `_config.py`
   - generate the nginx configuration automatically

3. In production mode, cache heavily for speed and reduce I/O
4. Provide a WebDAV API
5. Provide RSS output

CHANGES
=======
v1.2 - 2014.2.27

- generate HTML5-style html by default
- support running under docker
- simplify VHM configuration: only nginx needs to be set up, no config
  file changes required

v1.0 - 2013.1.1

- borrow from Jekyll, simplifying configuration
- greatly simplify the old historical code

v0.5 - 2012.12.30

- remove the wsgi Theme Filter, simplifying things
- remove the dependency on the themes folder; theme_url can be set in the
  site's metadata.json, pointing at the theme's URL; the default is the
  bundled bootstrap-style theme
- support markdown

v0.1 - 2012.12.14

- move the .json files, removing redundant folders
- simplify the .json content |
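The zcms docs describe a theme as nothing more than a Python string.Template with `$` placeholders. A tiny stdlib illustration (the placeholder values here are made up):

```python
from string import Template

# Minimal zcms-style theme using the documented $ placeholders.
theme = Template(
    '<html><head><title>$title - $site_title</title></head>'
    '<body><div>$upper</div><div>$content</div></body></html>')

# Rendering a page is a single substitute() call.
page = theme.substitute(
    title='Tutorial',
    site_title='Demo Site',
    upper='<p>header</p>',
    content='<p>Hello</p>')
```

Because `string.Template` has no logic of its own, all dynamic behavior (navigation, news, blogs) has to come from the reST directives rendered into `$content` and friends.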
zc.ngi | Network Gateway Interface

The Network Gateway Interface provides:

- the ability to test application networking code without use of
  sockets, threads or subprocesses
- clean separation of application code and low-level networking code
- a fairly simple inheritance-free set of networking APIs
- an event-based framework that makes it easy to handle many
  simultaneous connections while still supporting an imperative
  programming style.

To learn more, see http://packages.python.org/zc.ngi/

Changelog2.1.0 (2017-08-31)New features:support IPv62.0.1 (2012-04-06)Bugs FixedSending data faster than a socket could transmit it wasn't handled
correctly.2.0.0 (2011-12-10)Bugs Fixedzc.ngi.async listeners didn’t provide the real address when binding
to port 0.2.0.0a6 (2011-05-26)Bugs FixedIf application code made many small writes, each write was sent
individually, which could trigger Nagle’s algorithm.2.0.0a5 (2010-08-19)New Features:Connection objects have a new peer_address attribute, which is
equivalent to calling getpeername() on sockets.Bugs Fixed:Servers using unix-domain sockets didn't clean up socket files.When testing listeners were closed, handle_close, rather than close,
was called on server connections.The zc.ngi.async connections' write and writelines methods
didn’t raise errors when called on closed connections.The built-in connection adapters and handy adapter base class
didn’t implement __nonzero__.2.0.0a4 (2010-07-27)Bugs Fixed:When using zc.ngi.testing and a server sent input and closed a
connection before set_handler was called on the client, the input
sent by the server was lost.By default, calling close on a connection could cause already
written data not to be sent. Now, don't close connections until
data passed to write or writelines has, at least, been passed to the
underlying IO system (e.g. socket.send).(This means the undocumented practice of sending zc.ngi.END_OF_DATA
to write is now deprecated.)2.0.0a3 (2010-07-22)Bugs Fixed:Fixed a packaging bug.2.0.0a2 (2010-07-22)New Features:There’s a new experimental zc.ngi.async.Implementation.listener
option to run each client (server connection) in its own thread.(It's not documented. It's experimental, but there is a doctest.)Bugs Fixed:There was a bug in handling connecting to testing servers that
caused printing handlers to be used when they shouldn’t have been.2.0.0a1 (2010-07-08)New Features:New improved documentationSupport for writing request handlers in an imperative style using
generators.Cleaner testing interfacesRefactored zc.ngi.async thread management to make the blocking
APIs unnecessary. zc.ngi.async.blocking is now deprecated.Added support for running multiple async implementations in
separate threads. This is useful in applications with fewer network
connections and with handlers that tend to perform long-lasting
computations that would be unacceptable with a single select loop.Renamed IConnection.setHandler to set_handler.Dropped support for Python 2.4.Bugs Fixed:The Sized request adapter's writelines method was broken.There were a number of problems with error handling in the async implementation.1.1.6 (2010-03-01)Bug fixed:Fixed bad logging of "listening on ...". The message was emitted
before the actual operation was successful. Now emits a warning "unable to listen on ..." if binding to the given address fails.1.1.5 (2010-01-19)Bug fixed:Fixed a fatal win32 problem (socket.AF_UNIX usage).Removed improper use of the SO_REUSEADDR socket option on windows.The sized adapter performed poorly (because it triggered Nagle's
algorithm).1.1.4 (2009-10-28)Bug fixed:Spurious warnings sometimes occurred due to a race condition in
setting up servers.Added missing “writelines” method to zc.ngi.adapters.Lines.1.1.3 (2009-07-30)Bug fixed:zc.ngi.async bind failures weren’t handled properly, causing lots of
annoying log messages to get spewed, which tended to fill up log
files.1.1.2 (2009-07-02)Bugs fixed:The zc.ngi.async thread wasn’t named. All threads should be named.1.1.1 (2009-06-29)Bugs fixed:zc.ngi.blocking didn’t properly handle connection failures.1.1.0 (2009-05-26)Bugs fixed:Blocking input and output files didn’t properly synchronize closing.The testing implementation made muiltiple simultaneous calls to
handler methods in violation of the promise made in interfaces.py.Async TCP servers used too low a listen depth, causing performance
issues and spurious test failures.New features:Added UDP support.Implementation responsibilities were clarified through an
IImplementation interface. The “connector” attribute of the testing
and async implementations was renamed to “connect”. The old name
still works.Implementations are now required to log handler errors and to close
connections in response to connection-handler errors. (Otherwise,
handlers, and especially handler adapters, would have to do this.)1.0.1 (2007-05-30)Bugs fixed:Server startups sometimes failed with an error like:warning: unhandled read event
warning: unhandled write event
warning: unhandled read event
warning: unhandled write event
------
2007-05-30T22:22:43 ERROR zc.ngi.async.server listener error
Traceback (most recent call last):
File "asyncore.py", line 69, in read
obj.handle_read_event()
File "asyncore.py", line 385, in handle_read_event
self.handle_accept()
File "/zc/ngi/async.py", line 325, in handle_accept
sock, addr = self.accept()
TypeError: unpack non-sequence |
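The separation listed at the top of this section is what makes sockets unnecessary in tests: application handlers only ever see a connection object. A sketch of that pattern, where `handle_input`/`handle_close` follow the NGI connection-handler methods but `FakeConnection` is an illustrative test double rather than the real zc.ngi.testing machinery:

```python
class UpperEchoHandler:
    """Application code: echoes input upper-cased, never touches sockets."""

    def handle_input(self, connection, data):
        connection.write(data.upper())

    def handle_close(self, connection, reason):
        pass  # nothing to clean up in this toy handler


class FakeConnection:
    """Test double standing in for a real network connection."""

    def __init__(self):
        self.written = []

    def write(self, data):
        self.written.append(data)


conn = FakeConnection()
handler = UpperEchoHandler()
handler.handle_input(conn, 'hello\n')
```

Because the handler depends only on the connection's `write` method, the same class can be driven by a real async connection in production and by a fake one in tests.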
zc.objectlog | The objectlog package provides a customizable log for a single object. The
system was designed to provide information for a visual log of important
object changes and to provide analyzable information for metrics.

- It provides automatic recording for each log entry of a timestamp and
  the principals in the request when the log was made.
- Given a schema of data to collect about the object, it automatically
  calculates and stores changesets from the last log entry, primarily to
  provide a quick and easy answer to the question "what changed?" and
  secondarily to reduce database size.
- It accepts optional summary and detail values that allow the system or
  users to annotate the entries with human-readable messages.
- It allows each log entry to be annotated with zero or more marker
  interfaces so that log entries may be classified with an interface.
- Moreover, the log entries can be set to occur at transition boundaries,
  and to only occur if a change was made (according to the changeset)
  since the last log entry.

To show this, we need to set up a dummy interaction. We do this below, then
create an object with a log, then actually make a log.>>> import zope.security.management
>>> import zope.security.interfaces
>>> import zope.app.security.interfaces
>>> from zope import interface, schema
>>> from zope.app.testing import ztapi
>>> class DummyPrincipal(object):
... interface.implements(zope.security.interfaces.IPrincipal)
... def __init__(self, id, title, description):
... self.id = unicode(id)
... self.title = title
... self.description = description
...
>>> alice = DummyPrincipal('alice', 'Alice Aal', 'first principal')
>>> betty = DummyPrincipal('betty', 'Betty Barnes', 'second principal')
>>> cathy = DummyPrincipal('cathy', 'Cathy Camero', 'third principal')
>>> class DummyParticipation(object):
... interface.implements(zope.security.interfaces.IParticipation)
... interaction = principal = None
... def __init__(self, principal):
... self.principal = principal
...
>>> import zope.publisher.interfaces>>> import zc.objectlog
>>> import zope.location
>>> WORKING = u"Where I'm working"
>>> COUCH = u"On couch"
>>> BED = u"On bed"
>>> KITCHEN = u"In kitchen"
>>> class ICat(interface.Interface):
... name = schema.TextLine(title=u"Name", required=True)
... location = schema.Choice(
... (WORKING, COUCH, BED, KITCHEN),
... title=u"Location", required=False)
... weight = schema.Int(title=u"Weight in Pounds", required=True)
... getAge, = schema.accessors(
... schema.Int(title=u"Age in Years", readonly=True,
... required=False))
...
>>> import persistent
>>> class Cat(persistent.Persistent):
... interface.implements(ICat)
... def __init__(self, name, weight, age, location=None):
... self.name = name
... self.weight = weight
... self.location = location
... self._age = age
... self.log = zc.objectlog.Log(ICat)
... zope.location.locate(self.log, self, 'log')
... def getAge(self):
... return self._age
...Notice in the __init__ for cat that we located the log on the cat. This is
an important step, as it enables the automatic changesets.Now we are set up to look at examples. With one exception, each example
runs within a faux interaction so we can see how the principal_ids
attribute works. First we’ll see that len works, that the record_schema
attribute is set properly, that the timestamp uses a pytz.utc timezone for
the timestamp, that log iteration works, and that summary, details, and data
were set properly.>>> import pytz, datetime
>>> a_p = DummyParticipation(alice)
>>> interface.directlyProvides(a_p, zope.publisher.interfaces.IRequest)
>>> zope.security.management.newInteraction(a_p)
>>> emily = Cat(u'Emily', 16, 5, WORKING)
>>> len(emily.log)
0
>>> emily.log.record_schema is ICat
True
>>> before = datetime.datetime.now(pytz.utc)
>>> entry = emily.log(
... u'Starting to keep track of Emily',
... u'Looks like\nshe might go upstairs soon')
>>> entry is emily.log[0]
True
>>> after = datetime.datetime.now(pytz.utc)
>>> len(emily.log)
1
>>> before <= entry.timestamp <= after
True
>>> entry.timestamp.tzinfo is pytz.utc
True
>>> entry.principal_ids
(u'alice',)
>>> list(emily.log) == [entry]
True
>>> entry.record_schema is ICat
True
>>> entry.summary
u'Starting to keep track of Emily'
>>> entry.details
u'Looks like\nshe might go upstairs soon'The record and the record_changes should have a full set of values from the
object. The record has a special security checker that allows users to
access any field defined on the schema, but not to access any others nor to
write any values.>>> record = emily.log[0].record
>>> record.name
u'Emily'
>>> record.location==WORKING
True
>>> record.weight
16
>>> record.getAge()
5
>>> ICat.providedBy(record)
True
>>> emily.log[0].record_changes == {
... 'name': u'Emily', 'weight': 16, 'location': u"Where I'm working",
... 'getAge': 5}
True
>>> from zope.security.checker import ProxyFactory
>>> proxrecord = ProxyFactory(record)
>>> ICat.providedBy(proxrecord)
True
>>> from zc.objectlog import interfaces
>>> interfaces.IRecord.providedBy(proxrecord)
True
>>> from zope.security import canAccess, canWrite
>>> canAccess(record, 'name')
True
>>> canAccess(record, 'weight')
True
>>> canAccess(record, 'location')
True
>>> canAccess(record, 'getAge')
True
>>> canAccess(record, 'shazbot') # doctest: +ELLIPSIS
Traceback (most recent call last):
...
ForbiddenAttribute: ('shazbot', ...
>>> canWrite(record, 'name')
False
>>> zope.security.management.endInteraction()Interactions with multiple principals are correctly recorded as well. Note
that non-request participations are not included in the records. We also
look a bit more at the record and the change set.>>> a_p = DummyParticipation(alice)
>>> b_p = DummyParticipation(betty)
>>> c_p = DummyParticipation(cathy)
>>> interface.directlyProvides(a_p, zope.publisher.interfaces.IRequest)
>>> interface.directlyProvides(b_p, zope.publisher.interfaces.IRequest)
>>> zope.security.management.newInteraction(a_p, b_p, c_p)
>>> emily.location = KITCHEN
>>> entry = emily.log(u"Sounds like she's eating", u"Dry food,\nin fact.")
>>> len(emily.log)
2
>>> emily.log[0].summary
u'Starting to keep track of Emily'
>>> emily.log[1].summary
u"Sounds like she's eating"
>>> after <= emily.log[1].timestamp <= datetime.datetime.now(pytz.utc)
True
>>> emily.log[1].principal_ids # cathy was not a request, so not included
(u'alice', u'betty')
>>> emily.log[1].details
u'Dry food,\nin fact.'
>>> emily.log[1].record_changes
{'location': u'In kitchen'}
>>> record = emily.log[1].record
>>> record.location
u'In kitchen'
>>> record.name
u'Emily'
>>> record.weight
16
>>> zope.security.management.endInteraction()It is possible to make a log without an interaction as well.>>> emily._age = 6
>>> entry = emily.log(u'Happy Birthday') # no interaction
>>> len(emily.log)
3
>>> emily.log[2].principal_ids
()
>>> emily.log[2].record_changes
{'getAge': 6}
>>> record = emily.log[2].record
>>> record.location
u'In kitchen'
>>> record.name
u'Emily'
>>> record.weight
16
>>> record.getAge()
6Entries may be marked with marker interfaces to categorize them. This approach
may be difficult with security proxies, so it may be changed. We’ll do all
the rest of our examples within the same interaction.>>> c_p = DummyParticipation(cathy)
>>> interface.directlyProvides(c_p, zope.publisher.interfaces.IRequest)
>>> zope.security.management.newInteraction(c_p)
>>> emily.location = None
>>> emily.weight = 17
>>> class IImportantLogEntry(interface.Interface):
... "A marker interface for log entries"
>>> interface.directlyProvides(
... emily.log(u'Emily is in transit...and ate a bit too much'),
... IImportantLogEntry)
>>> len(emily.log)
4
>>> [e for e in emily.log if IImportantLogEntry.providedBy(e)] == [
... emily.log[3]]
True
>>> emily.log[3].principal_ids
(u'cathy',)
>>> emily.log[3].record_changes=={'weight': 17, 'location': None}
True
>>> record = emily.log[3].record
>>> old_record = emily.log[2].record
>>> record.name == old_record.name == u'Emily'
True
>>> record.weight
17
>>> old_record.weight
16
>>> record.location # None
>>> old_record.location
u'In kitchen'Making a log will fail if the record it is trying to make does not conform
to its schema.>>> emily.location = u'Outside'
>>> emily.log(u'This should never happen')
Traceback (most recent call last):
...
ConstraintNotSatisfied: Outside
>>> len(emily.log)
4
>>> emily.location = BEDIt will also fail if the arguments passed to it are not correct.>>> emily.log("This isn't unicode so will not succeed")
Traceback (most recent call last):
...
WrongType: ("This isn't unicode so will not succeed", <type 'unicode'>)
>>> len(emily.log)
4
>>> success = emily.log(u"Yay, unicode")The following is commented out until we have more# >>> emily.log(u”Data without an interface won’t work”, None, ‘boo hoo’)
Traceback (most recent call last):
…
WrongContainedType: []Zero or more additional arbitrary data objects may be included on the log entry
as long as they implement an interface.>>> class IConsumableRecord(interface.Interface):
... dry_food = schema.Int(
... title=u"Dry found consumed in ounces", required=False)
... wet_food = schema.Int(
... title=u"Wet food consumed in ounces", required=False)
... water = schema.Int(
... title=u"Water consumed in teaspoons", required=False)
...# >>> class ConsumableRecord(object):
… interface.implements(IConsumableRecord)
… def __init__(self, dry_food=None, wet_food=None, water=None):
… self.dry_food = dry_food
… self.wet_food = wet_food
… self.water = water
…
# >>> entry = emily.log(u’Collected eating records’, None, ConsumableRecord(1))
# >>> len(emily.log)
5
# >>> len(emily.log[4].data)
1
# >>> IConsumableRecord.providedBy(emily.log[4].data[0])
True
# >>> emily.log[4].data[0].dry_food
1__getitem__ and __iter__ work as normal for a Python sequence, including
support for extended slices.>>> list(emily.log) == [emily.log[0], emily.log[1], emily.log[2],
... emily.log[3], emily.log[4]]
True
>>> emily.log[-1] is emily.log[4]
True
>>> emily.log[0] is emily.log[-5]
True
>>> emily.log[5]
Traceback (most recent call last):
...
IndexError: list index out of range
>>> emily.log[-6]
Traceback (most recent call last):
...
IndexError: list index out of range
>>> emily.log[4:2:-1] == [emily.log[4], emily.log[3]]
TrueThe log’s record_schema may be changed as long as there are no logs or the
interface extends (or is) the interface for the last log.>>> emily.log.record_schema = IConsumableRecord # doctest: +ELLIPSIS
Traceback (most recent call last):
...
ValueError: Once entries have been made, may only change schema to one...
>>> class IExtendedCat(ICat):
... parent_object_intid = schema.Int(title=u"Parent Object")
...
>>> emily.log.record_schema = IExtendedCat
>>> emily.log.record_schema = ICat
>>> emily.log.record_schema = IExtendedCat
>>> class ExtendedCatAdapter(object):
... interface.implements(IExtendedCat)
... def __init__(self, cat): # getAge is left off
... self.name = cat.name
... self.weight = cat.weight
... self.location = cat.location
... self.parent_object_intid = 42
...
>>> ztapi.provideAdapter((ICat,), IExtendedCat, ExtendedCatAdapter)
>>> entry = emily.log(u'First time with extended interface')
>>> emily.log.record_schema = ICat # doctest: +ELLIPSIS
Traceback (most recent call last):
...
ValueError: Once entries have been made, may only change schema to one...
>>> emily.log[5].record_changes == {
... 'parent_object_intid': 42, 'getAge': None}
True
>>> record = emily.log[5].record
>>> record.parent_object_intid
42
>>> record.name
u'Emily'
>>> record.location
u'On bed'
>>> record.weight
17
>>> record.getAge() # None
>>> IExtendedCat.providedBy(record)
True
>>> old_record = emily.log[3].record
>>> IExtendedCat.providedBy(old_record)
False
>>> ICat.providedBy(old_record)
True
>>> old_record.parent_object_intid # doctest: +ELLIPSIS
Traceback (most recent call last):
...
AttributeError: ...Entries support convenience next and previous attributes, which make them
act like immutable doubly linked lists:>>> entry = emily.log[5]
>>> entry.previous is emily.log[4]
True
>>> entry.next # None
>>> entry.previous.previous.previous.previous.previous is emily.log[0]
True
>>> emily.log[0].previous # None
>>> emily.log[0].next is emily.log[1]
TrueObjectlogs also support deferring until the end of a transaction. To show
this, we will need a sample database, a transaction, and key reference
adapters. We show the simplest example first.>>> from ZODB.tests import util
>>> import transaction>>> db = util.DB()
>>> connection = db.open()
>>> root = connection.root()
>>> root["emily"] = emily
>>> transaction.commit()
>>> import zope.app.keyreference.persistent
>>> import zope.app.keyreference.interfaces
>>> import ZODB.interfaces
>>> import persistent.interfaces
>>> from zope import component
>>> component.provideAdapter(
... zope.app.keyreference.persistent.KeyReferenceToPersistent,
... (persistent.interfaces.IPersistent,),
... zope.app.keyreference.interfaces.IKeyReference)
>>> component.provideAdapter(
... zope.app.keyreference.persistent.connectionOfPersistent,
... (persistent.interfaces.IPersistent,),
... ZODB.interfaces.IConnection)>>> len(emily.log)
6
>>> emily.log(u'This one is deferred', defer=True) # returns None: deferred!
>>> len(emily.log)
6
>>> transaction.commit()
>>> len(emily.log)
7
>>> emily.log[6].summary
u'This one is deferred'
>>> emily.log[6].record_changes
{}

While this is interesting, the point is to capture changes to the object,
whether or not they happened when the log was called. Here is a more pertinent
example, then.

>>> len(emily.log)
7
>>> emily.weight = 16
>>> emily.log(u'Also deferred', defer=True) # returns None: deferred!
>>> len(emily.log)
7
>>> emily.location = COUCH
>>> transaction.commit()
>>> len(emily.log)
8
>>> emily.log[7].summary
u'Also deferred'
>>> import pprint
>>> pprint.pprint(emily.log[7].record_changes)
{'location': u'On couch', 'weight': 16}

Multiple log entries can be deferred in the same transaction, if desired.

>>> emily.log(u'One log', defer=True)
>>> emily.log(u'Two log', defer=True)
>>> len(emily.log)
8
>>> transaction.commit()
>>> len(emily.log)
10
>>> emily.log[8].summary
u'One log'
>>> emily.log[9].summary
u'Two log'

Another option is if_changed. It should not make a log unless there was a
change.

>>> len(emily.log)
10
>>> emily.log(u'If changed', if_changed=True) # returns None: no change!
>>> len(emily.log)
10
>>> emily.location = BED
>>> entry = emily.log(u'If changed', if_changed=True)
>>> len(emily.log)
11
>>> emily.log[10] is entry
True
>>> entry.summary
u'If changed'
>>> pprint.pprint(entry.record_changes)
{'location': u'On bed'}
>>> transaction.commit()

The two options, if_changed and defer, can be used together. This makes for
a log entry that will only be made at a transaction boundary if there have
been changes. Note that a log entry that occurs whether or not
changes were made (hereafter called a “required” log entry) that is also
deferred will always eliminate any deferred if_changed log entry, even if the
required log entry was registered later in the transaction.

>>> len(emily.log)
11
>>> emily.log(u'Another', defer=True, if_changed=True) # returns None
>>> transaction.commit()
>>> len(emily.log)
11
>>> emily.log(u'Yet another', defer=True, if_changed=True) # returns None
>>> emily.location = COUCH
>>> len(emily.log)
11
>>> transaction.commit()
>>> len(emily.log)
12
>>> emily.log[11].summary
u'Yet another'
>>> emily.location = KITCHEN
>>> entry = emily.log(u'non-deferred entry', if_changed=True)
>>> len(emily.log)
13
>>> entry.summary
u'non-deferred entry'
>>> emily.log(u'will not write', defer=True, if_changed=True)
>>> transaction.commit()
>>> len(emily.log)
13
>>> emily.log(u'will not write', defer=True, if_changed=True)
>>> emily.location = WORKING
>>> emily.log(u'also will not write', defer=True, if_changed=True)
>>> emily.log(u'required, deferred', defer=True)
>>> len(emily.log)
13
>>> transaction.commit()
>>> len(emily.log)
14
>>> emily.log[13].summary
u'required, deferred'

This should all work in the presence of multiple objects, of course.

>>> sam = Cat(u'Sam', 20, 4)
>>> root['sam'] = sam
>>> transaction.commit()
>>> sam.weight = 19
>>> sam.log(u'Auto log', defer=True, if_changed=True)
>>> sam.log(u'Sam lost weight!', defer=True)
>>> sam.log(u'Saw sam today', defer=True)
>>> emily.log(u'Auto log', defer=True, if_changed=True)
>>> emily.weight = 15
>>> transaction.commit()
>>> len(sam.log)
2
>>> sam.log[0].summary
u'Sam lost weight!'
>>> sam.log[1].summary
u'Saw sam today'
>>> len(emily.log)
15
>>> emily.log[14].summary
u'Auto log'

>>> # TEAR DOWN
>>> zope.security.management.endInteraction()
>>> ztapi.unprovideUtility(zope.app.security.interfaces.IAuthentication)

Changes

0.2 (2008-05-16)
- removed dependency on zc.security; loosened restrictions on principal_ids.

0.1.1 (2008-04-02)
- Updated path in setup.py for setup to work on platforms other than linux.

0.1 (2008-04-02)
- Initial release (removed devstatus). |
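The deferred-logging behavior exercised in the doctests above can be modeled with a plain-Python stand-in. This toy class only illustrates the pattern; zc.objectlog's real implementation rides on the ZODB transaction machinery:

```python
class ToyLog:
    """Toy model of a log whose entries can be deferred to commit time.

    Illustrative only -- not zc.objectlog's actual (transaction-based) code.
    """

    def __init__(self):
        self.entries = []      # committed log entries
        self._deferred = []    # entries waiting for commit()

    def log(self, summary, defer=False):
        if defer:
            self._deferred.append(summary)
            return None        # deferred calls return nothing, as above
        self.entries.append(summary)
        return summary

    def commit(self):
        # At the transaction boundary, deferred entries are written out.
        self.entries.extend(self._deferred)
        self._deferred.clear()


log = ToyLog()
log.log("immediate")
log.log("later", defer=True)   # returns None: deferred
print(len(log.entries))        # 1
log.commit()
print(log.entries)             # ['immediate', 'later']
```

The real log additionally records the object's state changes at commit time, which is what makes deferral useful.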
zcode | Zee Code
ZCode is a custom compression algorithm I originally developed for a competition held for the Spring 2019 Data Structures
and Algorithms course of Dr. Mahdi Safarnejad-Boroujeni at Sharif University of Technology, in which I took
first place. The code is pretty slow and has a lot of room for optimization, but it is pretty readable. It can be an
excellent educational resource for whoever is starting on compression algorithms.

The algorithm is a cocktail of classical compression algorithms mixed and served for Unicode documents. It hinges around
the LZW algorithm to create a finite-size symbol dictionary; the results are then byte-coded into variable-length custom
symbols, which I call zeecodes! Finally, the symbol table is truncated accordingly, and the compressed document is
encoded into a byte stream.

Huffman trees highly inspire zeecodes, but because in normal texts symbols are usually much more uniformly distributed
than the original geometrical (or exponential) distribution assumption for effective Huffman coding, the gains of using
variable-sized byte-codes both from an implementation and performance perspective outweighed bit Huffman encodings.
Results may vary, but my tests showed a steady ~4-5x compression ratio on Farsi texts, which is pretty nice!

Installation
ZCode is available on pip, and only requires a Python 3.6 or higher installation beforehand.

pip install -U zcode

Usage
You can run the algorithm for any utf-8 encoded file using the zcode command. It will automatically decompress files
ending with a .zee extension and compress others into .zee files, but you can always override the default behavior
by providing optional arguments like:

zcode INPUTFILE [--output OUTPUT_FILE --action compress/decompress --symbol-size SYMBOL_SIZE --code-size CODE_SIZE]

The symbol-size argument controls the algorithm's buffer size for processing symbols (in bytes). It is automatically
set depending on your input file size, but you can change it as you wish. code-size controls the maximum length of
coded bytes while encoding symbols (this equals 2 by default and needs to be provided to the algorithm upon
decompression).

LICENSE
MIT LICENSE, see vahidzee/zcode/LICENSE |
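The LZW dictionary-growing step at the heart of the algorithm described above can be sketched as follows. This is a generic illustration of classical LZW, not zcode's actual implementation (whose symbol table is additionally bounded and byte-coded into zeecodes):

```python
def lzw_compress(text):
    """Classical LZW: learn substrings as they repeat and emit
    dictionary indices instead of raw characters."""
    # Seed the dictionary with every distinct character of the input.
    table = {ch: i for i, ch in enumerate(sorted(set(text)))}
    out, current = [], ""
    for ch in text:
        candidate = current + ch
        if candidate in table:
            current = candidate            # keep extending the match
        else:
            out.append(table[current])     # emit the longest known prefix
            table[candidate] = len(table)  # learn the new substring
            current = ch
    if current:
        out.append(table[current])
    return out

print(lzw_compress("abababab"))  # [0, 1, 2, 4, 1]
```

Repetitive input compresses well because ever-longer substrings get single indices; the compression ratio reported above comes from this effect plus the byte-coding stage.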
zcode-system-discount | zcode system discountpip3zcodesystemdiscount |
zcodevars | No description available on PyPI. |
zcoinbase | No description available on PyPI. |
zcollection | This project is a Python library for manipulating data partitioned into a collection of Zarr groups.

This collection allows dividing a dataset into several partitions to facilitate
acquisitions or updates made from new products. Possible data partitioning is:
by date (hour, day, month, etc.) or by sequence.

A collection partitioned by date, with a monthly resolution, may look like this on
disk:

collection/
├── year=2022
│ ├── month=01/
│ │ ├── time/
│ │ │ ├── 0.0
│ │ │ ├── .zarray
│ │ │ └── .zattrs
│ │ ├── var1/
│ │ │ ├── 0.0
│ │ │ ├── .zarray
│ │ │ └── .zattrs
│ │ ├── .zattrs
│ │ ├── .zgroup
│ │ └── .zmetadata
│ └── month=02/
│ ├── time/
│ │ ├── 0.0
│ │ ├── .zarray
│ │ └── .zattrs
│ ├── var1/
│ │ ├── 0.0
│ │ ├── .zarray
│ │ └── .zattrs
│ ├── .zattrs
│ ├── .zgroup
│ └── .zmetadata
└── .zcollection

Partition updates can be set to overwrite existing data with new ones or to
update them using different strategies.

The Dask library handles the data so that processing scales quickly.

It is possible to create views on a reference collection, to add and modify
variables contained in a reference collection, which is accessible read-only.

This library can store data on POSIX, S3, or any other file system supported by
the Python library fsspec. Note, however, that only POSIX
and S3 file systems have been tested. |
zcomm | No description available on PyPI. |
zcommand | No description available on PyPI. |
zcommands-zx | [^_^]:
Name: small-inventions series, zfind

Background
We use the find command all the time, but find is really not that pleasant to use, so I wrote a python script that wraps the find command to make it friendlier; using it can greatly improve efficiency.
It lets us complete formerly complex queries with far less typing, and it prints the underlying command it generates. Let's look at a few examples.

Examples
Previously, to use find to look for "documents whose name contains XX and whose suffix is XX" under the current directory, case-insensitively and following directory symlinks, I had to write very long arguments.

Case 1: a quick first taste
For example, to find markdown files whose name contains make under the current directory, I had to write: find -L . -iname "*make*.md" -type f. By contrast, now I only have to write: zfind make. The actual effect is as follows:

➜ interview zfind make
the command is: find -L . -iname "*make*.md" -type f
./writings/cpp_rank/25_0_什么是Cmake_28.md
./writings/cpp-interview/cpp_rank/25_0_什么是Cmake_28.md
./writings/cpp-interview/cpp_basic/18_0_make的用法_11298.md
./writings/cpp-interview/cpp_basic/12_0_make的用法_11291.md
./htmls/cpp-html/make/@makefile写法.htm.md
./htmls/cpp-html/make/@make命令零基础教程.html.md
./htmls/cpp-html/make/@CMakeTutorial.htm.md
./cpp-interview/cpp_rank/25_0_什么是Cmake_28.md
./cpp-interview/cpp_basic/18_0_make的用法_11298.md
./cpp-interview/cpp_basic/12_0_make的用法_11291.md

Case 2: querying by a specific file suffix
Again, say I want to find html files whose name contains make under the current directory; I had to write: find -L . -iname "*make*.html" -type f. By contrast, now I only write: zfind make -s html. The actual effect is as follows:

➜ interview zfind make -s html
the command is: find -L . -iname "*make*.html" -type f
./htmls/cpp-html/make/Make命令零基础教程.html
./htmls/cpp-html/make/makefile-Whatisthedifferencebetween_make_and_makeall__-StackOverflow.html

Case 3: querying multiple file suffixes
Again, say I want to find html and htm files whose name contains make under the current directory; I had to write two statements: find -L . -iname "*make*.html" -type f and find -L . -iname "*make*.htm" -type f. By contrast, now I only write: zfind make -s html+htm

Case 4: excluding a specific path
Sometimes I do not want to search a certain path, so I can use -e to exclude it; e is the first letter of exclude. The exclusion is always a fuzzy match.
If I write this statement: zfind find -e blog, it actually generates this rather complex command: the command is: find -L . -iname "*find*.md" -type f -print -o -path "*blog*" -prune

Usage
Method 1
I provide the script below; you only have to: name it zfind, without the .py suffix; make it executable with chmod a+x zfind; and put it somewhere on your executable search path. Then you can happily use zfind KEYWORD.
Because I like writing documents in markdown, I make the find command look for markdown files by default.

Method 2
pip install zcommands-zx

The script

#! /Users/zxzx/.conda/envs/scrapy/bin/python  # replace with your local python interpreter path
# coding=utf-8
import os, sys

help_txt = """
Use zfind -h for help
Use zfind KEYWORD to search the current directory for md documents containing KEYWORD
Use zfind KEYWORD -d PATH to search the given directory for md documents containing KEYWORD
Use zfind KEYWORD -d PATH -s SUFFIX to search the given directory for documents with the given suffix containing KEYWORD
"""
######################## prepare variables
search_dir = ''
keyword = ''
suffix = ''
type = ''
args = sys.argv
if len(args) > 1 and args[1] == '-h':
    print(help_txt)
if len(args) >= 2:
    keyword = sys.argv[1]
    # capture the remaining optional arguments
    opt_args = args[2:]
    for i in range(len(opt_args)):
        if (opt_args[i] == '-s'):
            suffix = opt_args[i + 1]
        if (opt_args[i] == '-d'):
            search_dir = opt_args[i + 1]
        if (opt_args[i] == '-t'):
            type = opt_args[i + 1]
######### build the command
search_dir = search_dir or '.'
suffix = suffix or 'md'
type = type or 'f'
if type == 'd':
    suffix = ''
else:
    suffix = '.' + suffix
command = 'find -L {} -iname "*{}*{}" -type {}'.format(search_dir, keyword, suffix, type)
# example as: find . -iname "*@make*.md"
######### run the query, following symlinks
print("the command is: ", command)
ret = os.popen(command).readlines()
for line in ret:
    print(line, end='')

Drawbacks
None found so far. |
zcommon | Please see the github repo and help @https://github.com/LamaAni/zcommon.py |
zcommon4py | How to installpip install zcommon4py # or pip3Loggerfromzcommon4pyimportZLoggerif__name__=="__main__":logger=ZLogger("logger")logger.debug("debug")logger.info("info")logger.warning("warning")logger.error("error")AuthorKhoiDD |
zcommons | zcommons for pythonA collection of common utils for python.RequirementsPython >= 3.6colorama, dataclasses(if python==3.6), sortedcontainersInstallpipinstallzcommons==0.2.0 |
zconcurrent | Overview
This project allows you to execute a list of http operations asynchronously from within a synchronous context.

It does not care whether you should do this. It simply allows you to do so if you desire.

Installing
The package is available via pip.

pip install zconcurrent

If you're not on Windows, install the uvloop extra to increase performance.

pip install "zconcurrent[uvloop]"

Usage
The package can be imported as shown:

from zconcurrent.zsession import zSession, RequestMap, RequestResults

Class | Description
zSession | Session object containing collection of requests to send
RequestMap | Container object that stores all info about an individual request to send
RequestResults | Container object that stores the request responses and any exceptions raised

Example

# Create RequestMap objects
req1 = RequestMap(
    url="https://baconipsum.com/api",
    httpOperation="GET",
    queryParams={"type": "meat-and-filler", "format": "json"},
)
req2 = RequestMap(
    url="https://baconipsum.com/api",
    httpOperation="GET",
    queryParams={"type": "all-meat", "format": "json"},
)
req3 = RequestMap(
    url="https://baconipsum.com/api",
    httpOperation="GET",
    queryParams={"type": "meat-and-filler", "format": "json"},
)

# Create zSession and call sendRequests()
session = zSession(requestMaps=[req1, req2, req3])
reqResps: RequestResults = session.sendRequests(return_exceptions=True)

# Handle exceptions raised for individual requests
if len(reqResps.taskExceptions) > 0:
    print("Handling exceptions")

# Handle responses for individual requests
for resp in requestResponses:
    httpVerb = resp.requestMap.httpOperation
    print(f"Evaluating response for {httpVerb} request to {resp.requestMap.url}")
    print(f"Status Code: {resp.statusCode}")
    if resp.body is not None:
        print(resp.body)

RequestMap Class

class RequestMap(msgspec.Struct):
    url: str
    httpOperation: Literal["GET", "POST", "PUT", "PATCH", "OPTIONS", "DELETE"]
    body: dict | None = None
    queryParams: dict[str, str] | None = None
    headers: dict[str, str] | None = None

RequestResponse Class

class RequestResponse(msgspec.Struct):
    requestMap: RequestMap
    statusCode: int
    body: dict | None = None

RequestResults Class

@dataclass
class RequestResults:
    requestResponses: list[RequestResponse]
    taskExceptions: list[BaseException] |
zconfigparser | This is a python3 (3.6+) library
to provide some section inheritance functionality
to configparser.

For details, see the documentation.

License
The software is licensed under The MIT License. See LICENSE. |
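zconfigparser's own inheritance syntax is covered by its documentation; the general idea of section inheritance can be illustrated on top of the stdlib configparser with a hypothetical helper like this (the helper name and merge rule are assumptions, not zconfigparser's API):

```python
import configparser


def resolve_section(cp, name, parent=None):
    """Merge a parent section's options under a child section,
    with the child's values winning on conflicts."""
    merged = dict(cp[parent]) if parent and cp.has_section(parent) else {}
    merged.update(cp[name])
    return merged


cp = configparser.ConfigParser()
cp.read_string("""
[base]
host = localhost
port = 8080

[dev]
port = 9090
""")

print(resolve_section(cp, "dev", parent="base"))
# {'host': 'localhost', 'port': '9090'}
```

The child section only has to state what differs from its parent, which is the point of section inheritance.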
zconfig-watchedfile | zconfig_watchedfile
Provides a ZConfig statement to register a logging handler that uses a WatchedFileHandler, which is helpful for integrating with an external
logrotate service:%import zconfig_watchedfile
<logger>
name example
<watchedfile>
path /path/to/logfile.log
</watchedfile>
</logger>

The <watchedfile> section supports both the default ZConfig settings for handlers
(formatter, dateformat, level) and the parameters of WatchedFileHandler (mode, encoding, delay).

This package is compatible with Python version 3.8 up to 3.11.

Change log for zconfig_watchedfile

2.0 (2023-08-17)
- Drop support for all Python versions older than 3.8.
- Add support for Python 3.9, 3.10, 3.11.

1.2 (2019-12-04)
- Migrated to github.
- Add support for Python 3.7 and 3.8.

1.1 (2019-01-25)
- Makes setup.py compatible with newer setuptools versions.
- Drop support for Python 2.6.

1.0 (2013-11-29)
- initial release |
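The ZConfig snippet above corresponds to Python's stdlib logging handler. A plain-Python equivalent of that configuration looks roughly like this (the path and logger name are made up for illustration):

```python
import logging
import logging.handlers
import os
import tempfile

# WatchedFileHandler checks the file's device/inode on each emit and
# reopens it if logrotate has moved the file aside.
path = os.path.join(tempfile.mkdtemp(), "logfile.log")

logger = logging.getLogger("example")
handler = logging.handlers.WatchedFileHandler(path)  # mode, encoding, delay also accepted
handler.setFormatter(logging.Formatter("%(name)s %(levelname)s %(message)s"))
logger.addHandler(handler)

logger.warning("rotated-safe logging")
handler.flush()
```

The ZConfig statement saves you from writing this wiring by hand and keeps the handler definition in the config file.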
zcons | No description available on PyPI. |
zconsole | Failed to fetch description. HTTP Status Code: 404 |
zconst | A handy constants package. It includes three ways to implement constants.
1. Base-class inheritance: from zconst.const_base import const
class my_const(const):
a = 1
my_const = my_const()
print(my_const.a)
my_const.a = 1
2. Decorator: from zconst.const_decorator import const
@const
class my_const():
a = 1
print(my_const.a)
my_const.a = 1
3. Metaclass: from zconst.const_metaclass import const
class my_const(metaclass=const):
a = 1
print(my_const.a)
my_const.a = 1 |
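For reference, the metaclass variant described above can be written in a few lines. This sketch only shows the idea of rejecting reassignment; it is not zconst's actual code:

```python
class ConstMeta(type):
    """Metaclass that rejects rebinding of existing class attributes."""

    def __setattr__(cls, name, value):
        if name in cls.__dict__:
            raise AttributeError(f"cannot rebind constant {name!r}")
        super().__setattr__(name, value)


class MyConst(metaclass=ConstMeta):
    a = 1


print(MyConst.a)  # 1
try:
    MyConst.a = 2
except AttributeError as exc:
    print(exc)    # cannot rebind constant 'a'
```

Because attribute assignment on a class goes through its metaclass, the check intercepts every rebinding attempt at class level.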
zcontact | ZContact is an online contact management application built on the
Zope3 web application framework. Below are instructions for managing
ZContact on Ubuntu Linux. With some tweaks, this might even work on
Mac OSX and Windows.Quick StartFollow these instructions to install ZContact and create a default
server setup.Install dependencies if they are not installed already (most of
these dependencies are from Zope 3):$ sudo apt-get install build-essential python-all python-all-dev
libc6-dev libicu-dev python-setuptoolsInstall ZContact:$ sudo easy_install-2.4 zcontactCreate an “instance” of zcontact (including server configuration,
log files and database) called “MyZContactServer”. Feel free to
replace MyZContactServer with whatever you want, or leave it blank and
it will default to just “zcontact”:$ paster make-config zcontact MyZContactServerGo to the newly created configuration area for your zcontact
instance and start the server:$ cd MyZContactServer
$ paster serve deploy.iniZContact will now be available athttp://localhost:8080.Updating Your ZContact InstallationTo update your ZContact application, simply run the following command
and restart your server.$ sudo easy_install-2.4 -U zcontact(the -U stands for “Update”).Running ZContact as a DaemonTo run ZContact as a daemon, go to the directory where your ZContact
instance is located and type:

$ paster serve deploy.ini --daemon

The running daemon can be stopped with:

$ paster serve deploy.ini stop

Migrating Data
To migrate data from one zcontact server to another, follow these
steps:Make sure both zcontact instances arenotrunning.Copy the database file you want to migrate to the new instance.
The database file is located in the var/ directory of the ZContact
instance and is called Data.fs. You do not need to move any of the
Data.fs.* files.Restart your ZContact instance.Developer InstallationIf you want to setup ZContact as a developer (i.e. from a repository
checkout) rather than installing it as an egg on your system, follow
these steps:Grab a branch of the latest ZContact code from Launchpad:$ bzr branch http://bazaar.launchpad.net/~pcardune/zcontact/zcontact-lp
(Note: you can also use bzr checkout instead of bzr branch if you
don't want to get all the revision information)

Change to the directory where you just created the branch:

$ cd zcontact-lp

Run make:

$ make
(Note: This will run the bootstrap.py script which sets up buildout,
and it will invoke buildout which downloads all the necessary eggs
to the eggs/ directory. If you have a common place where you have
development eggs available, you should modify buildout.cfg before
running make.)Run the tests:$ make testCreate the configuration:$ make install
(This adds the var and log directories along with a deploy.ini,
site.zcml, and zope.conf in the checkout)Start the server:$ make runGenerate test coverage reports:$ make coverageNOTE: if you get errors about setuptools not being the right version,
then you need to install the easy_install script and run:$ sudo easy_install-2.4 -U setuptools(The -U option forces setuptools to look online for the latest
updates)If you don’t like using make, or you are not on a Linux system, then
try the following:$ python bootstrap.py
$ ./bin/buildout -vNA note to the wise: It seems to be the consensus of the Zope
community that one should never use the standard system python to run
your software because you might screw it up. And screwing up system
Python is not a good idea if you can avoid it. So to really do this
properly, you should install your own python by actually downloading
the src, compiling it, and installing it to some place like
/opt/mypython. Then when you install the checkout, use:$ /opt/mypython/bin/python bootstrap.py
$ ./bin/buildout -vNAnd that will be best.Getting More InformationContact me on chat.freenode.net. My most common username is pcardune
and I hang around #schooltool and #zope3-dev. Otherwise, email me at
paul_at_carduner_dot_netPlease send me requests for other instructions you want to be put into
this README file.place holder for changes |
zcontroller | UNKNOWN |
zcooldl | ZCool Downloader
ZCool picture crawler. Download ZCool (https://www.zcool.com.cn/) Designer's or User's pictures, photos and illustrations.

Free software: MIT license
Documentation: https://zcooldl.readthedocs.io.

Features
- Fast downloads: multi-threaded asynchronous downloading, with a configurable number of threads.
- Retry on failure: with enough retries, there is no picture that cannot be downloaded.
- Incremental downloads: when a designer/user uploads something new, just run the program again.
- Topic selection: you can download a user's chosen topics instead of everything under that user.
- Download favorites (new): use -c <favorites URL, ...> to download works in a favorites folder (favorites folders can be created freely).

Quickstart
Install zcooldl via pip:

$ pip install -U zcooldl

Download all of a username's pictures to the current directory:

$ zcooldl -u <username>

Credits
This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.

History
0.1.4 (2020.12.01)
- New: parameter -c <favorites URL, ...> to download works from favorites folders.

0.1.3 (2020.07.22)
- First release on PyPI.
- Fixed an issue where not all images could be fetched and downloaded on dynamically loaded pages.
- Added a sequence number to saved image file names to preserve the original order.
- Added comments and tidied code details.

2020.03.25
- Improved terminal output, with different message types marked in different colors.
- Fixed failures to download images on slow connections, and sped up overall downloading.

0.1.2 (2020.03.24)
- New: download full-resolution originals by default (a few MB each); use --thumbnail to download thumbnails (max width 1280px, about 500KB).
- Added support for downloading JPG, PNG, GIF and BMP images.

0.1.1 (2019.12.09)
- New: you can select specific topics of a user to download.
- Support entering multiple usernames or IDs at once.
- Bug fix: fixed a download error when a user had not uploaded any images.

0.1.0 (2019.09.09)
- Main features: fast multi-threaded asynchronous downloads with a configurable number of threads; retry on failure; incremental downloads; proxy support (changed to reading the system proxy automatically after 0.1.3). |
zc.openlayers | UNKNOWN |
zcore | test |
zcp | ObjectiveThis project started as a way to integrate monitoring information collected in a Cloud environment,
namely by OpenStack’s Ceilometer, integrating it with an already existing monitoring solution using Zabbix.FeaturesIntegration of OpenStack’s available monitoring information (e.g. using Ceilometer) with already existing
Monitoring systems (e.g. Zabbix);Automatically gather information about the existing Cloud Infrastructure being considered (tenants, instances);Seamlessly handle changes in the Cloud Infrastructure (creation and deletion of tenants and/or instances);Periodically retrieve resources/meters details from OpenStack;Allow to have one common monitoring system (e.g Zabbix) for several OpenStack-based Cloud Data Centres;Support keystone v3 to allow multiple domains using multiple proxies;Support rabbitmq clusters to consume messages from topics of keystone and nova;Provide default template(Template ZCP) to import through zabbix web interface;Provide mongo driver to retrive metrics from Ceilometer mongodb directly.RequirementsThe Zabbix-Ceilometer Proxy was written using _Python_ version 2.7.5 but can be easily ported to version 3.
It uses the Pika library for support of the AMQP protocol, used by OpenStack.

To install Pika, if you already have _Python_ and the _pip_ package manager configured, you only need to
open a terminal/console and run the following command under the project directory:

sudo pip install -r requirement.txt

If the previous command fails, download and manually install the library on the host where you intend to
run the ZCP.NoteSince the purpose of this project is to be integrated with OpenStack and Zabbix it is assumed
that apart from a running installation of these two, some knowledge of OpenStack has already
been acquired.UsageAssuming that all the above requirements are met, the ZCP can be run with 3 simple steps:On your OpenStack installation point to your Keystone configuration file (keystone.conf) and
update notification_driver to messaging (only this driver is supported for now):

notification_driver = messaging

Remember to modify Ceilometer's event_pipeline.yaml. When the notification_driver setup is done,
a number of identity.authenticate events will be put into the Ceilometer queue (notification.sample).
There is no sense in recording those events. The sample configuration in /etc/ceilometer/event_pipeline.yaml follows:

| sources:
| - name: event_source
| events:
| - "*"
| - "!identity.authenticate"
| sinks:
| - event_sink
| sinks:
| - name: event_sink
| transformers:
| triggers:
| publishers:
| - notifier://

Create directories for ZCP's log file and configuration file:

$ sudo mkdir /var/log/zcp/
$ sudo mkdir /etc/zcp/

Copy proxy.conf to /etc/zcp/ and edit the proxy.conf configuration file to reflect your own system,
including the IP addresses and ports of Zabbix and of the used OpenStack modules (RabbitMQ, Ceilometer
Keystone and Nova). You can also tweak some ZCP internal configurations such as the polling interval and
proxy name (used in Zabbix):

$ sudo cp etc/proxy.conf /etc/zcp/proxy.conf

Install the zcp source code:

$ python setup.py install

Add the template name (use Template ZCP as the default) under 'zcp_configs' and import the template into Zabbix
through the Zabbix Web Interface. You can see Template ZCP in Zabbix Templates if the import succeeds.

Finally, run the Zabbix-Ceilometer Proxy in your console:

$ eszcp-polling

If all goes well, the information retrieved from OpenStack's Ceilometer will be pushed into your Zabbix
monitoring system.

Note
You can check out a demo from a preliminary version of ZCP running with OpenStack Havana and Zabbix.

Source
If not doing so already, you can check out the latest version of ZCP.

Copyright
Copyright (c) 2014 OneSource Consultoria Informatica, Lda.
Copyright (c) 2017 EasyStack Inc.

Thanks to Cláudio Marques, David Palma and Luis Cordeiro for the original idea.

This project was developed for the demand of Industrial Bank Co., Ltd by Branty and Hanxi Liu. |
zc.parse_addr | Parse network addresses of the form: HOST:PORT

>>> import zc.parse_addr
>>> zc.parse_addr.parse_addr('1.2.3.4:56')
('1.2.3.4', 56)

It would be great if this little utility function were part
of the socket module. |
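The whole utility above fits in a couple of lines. This sketch (not necessarily zc.parse_addr's exact source) splits on the last colon and parses the port as an integer:

```python
def parse_addr(addr):
    """Parse 'HOST:PORT' into a ('HOST', port) tuple with an int port."""
    host, _, port = addr.rpartition(':')
    return host, int(port)


print(parse_addr('1.2.3.4:56'))  # ('1.2.3.4', 56)
```

Splitting on the last colon keeps any earlier colons inside the host part.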
zcpm | No description available on PyPI. |
zc.queue | Persistent Queues
Persistent queues are simply queues that are optimized for persistence via the
ZODB. They assume that the ZODB is using MVCC to avoid read conflicts. They
attempt to resolve write conflicts so that transactions that add and remove
objects simultaneously are merged, unless the transactions are trying to
remove the same value from the queue.

An important characteristic of these queues is that they do not expect to
hold more than one reference to any given equivalent item at a time. For
instance, some of the conflict resolution features will not perform
desirably if it is reasonable for your application to hold two copies of the
string “hello” within the same queue at once[1].

The module provides two flavors: a simple persistent queue that keeps all
contained objects in one persistent object (Queue), and a
persistent queue that divides up its contents into multiple composite
elements (CompositeQueue). They should be equivalent in terms of
API and so are mostly examined in the abstract in this document: we’ll generate
instances with a representative Queue factory, which could be either class.
They only differ in an aspect of their write conflict resolution behavior,
which is discussed below.Queues can be instantiated with no arguments.>>> q = Queue()
>>> from zc.queue.interfaces import IQueue
>>> from zope.interface.verify import verifyObject
>>> verifyObject(IQueue, q)
TrueThe basic API is simple: useputto add items to the back of the queue, andpullto pull things off the queue, defaulting to the front of the queue.
Note thatItemcould be either persistent or non persistent object.>>> q.put(Item(1))
>>> q.put(Item(2))
>>> q.pull()
1
>>> q.put(Item(3))
>>> q.pull()
2
>>> q.pull()
3Thepullmethod takes an optional zero-based index argument, and can accept
negative values.>>> q.put(Item(4))
>>> q.put(Item(5))
>>> q.put(Item(6))
>>> q.pull(-1)
6
>>> q.pull(1)
5
>>> q.pull(0)
4Requesting an item from an empty queue raises an IndexError.>>> q.pull() # doctest: +ELLIPSIS
Traceback (most recent call last):
...
IndexError: ...Requesting an invalid index value does the same.>>> q.put(Item(7))
>>> q.put(Item(8))
>>> q.pull(2) # doctest: +ELLIPSIS
Traceback (most recent call last):
...
IndexError: ...Beyond these core queue operations, queues support len…>>> len(q)
2
>>> q.pull()
7
>>> len(q)
1
>>> q.pull()
8
>>> len(q)
0…iter (which doesnotempty the queue)…>>> next(iter(q))
Traceback (most recent call last):
...
StopIteration
>>> q.put(Item(9))
>>> q.put(Item(10))
>>> q.put(Item(11))
>>> next(iter(q))
9
>>> [i for i in q]
[9, 10, 11]
>>> q.pull()
9
>>> [i for i in q]
[10, 11]…bool…>>> bool(q)
True
>>> q.pull()
10
>>> q.pull()
11
>>> bool(q)
False…and list-like bracket access (which again doesnotempty the queue).>>> q.put(Item(12))
>>> q[0]
12
>>> q.pull()
12
>>> q[0] # doctest: +ELLIPSIS
Traceback (most recent call last):
...
IndexError: ...
>>> for i in range (13, 23):
... q.put(Item(i))
...
>>> q[0]
13
>>> q[1]
14
>>> q[2]
15
>>> q[-1]
22
>>> q[-10]
13That’s it–there’s no additional way to add anything beyondput, and no
additional way to remove anything beyondpull.The only other wrinkle is the conflict resolution code. Conflict
resolution in ZODB has some general caveats of which you should be aware[2].These general caveats aside, we’ll now examine some examples of zc.queue
conflict resolution at work. To show this, we will have to have two
copies of the same queue, from two different connections.NOTE: this testing approach has known weaknesses. See discussion in tests.py.>>> import transaction
>>> from zc.queue.tests import ConflictResolvingMappingStorage
>>> from ZODB import DB
>>> db = DB(ConflictResolvingMappingStorage('test'))
>>> transactionmanager_1 = transaction.TransactionManager()
>>> transactionmanager_2 = transaction.TransactionManager()
>>> connection_1 = db.open(transaction_manager=transactionmanager_1)
>>> root_1 = connection_1.root()>>> q_1 = root_1["queue"] = Queue()
>>> transactionmanager_1.commit()>>> transactionmanager_2 = transaction.TransactionManager()
>>> connection_2 = db.open(transaction_manager=transactionmanager_2)
>>> root_2 = connection_2.root()
>>> q_2 = root_2['queue']Now we have two copies of the same queue, with separate transaction managers
and connections about the same way two threads would have them. The ‘_1’
suffix identifies the objects for user 1, in thread 1; and the ‘_2’ suffix
identifies the objects for user 2, in a concurrent thread 2.First, let’s have the two users add some items to the queue concurrently.
For concurrent commits of only putting a single new item (one each in two
transactions), in both types of queue the user who commits first gets the
lower position in the queue–that is, the position that will leave the queue
sooner using defaultpullcalls.In this example, even though q_1 is modified first, q_2’s transaction is
committed first, so q_2’s addition is first after the merge.>>> q_1.put(Item(1001))
>>> q_2.put(Item(1000))
>>> transactionmanager_2.commit()
>>> transactionmanager_1.commit()
>>> connection_1.sync()
>>> connection_2.sync()
>>> list(q_1)
[1000, 1001]
>>> list(q_2)
[1000, 1001]For commits of more than one additions per connection of two, or of more than
two concurrent adding transactions, the behavior is the same for the
Queue: the first commit’s additions will go before the second
commit’s.>>> from zc import queue
>>> if isinstance(q_1, queue.Queue):
... for i in range(5):
... q_1.put(Item(i))
... for i in range(1002, 1005):
... q_2.put(Item(i))
... transactionmanager_2.commit()
... transactionmanager_1.commit()
... connection_1.sync()
... connection_2.sync()
...As we’ll see below, that will again reliably put all the values from the first
commit earlier in the queue, to result in
[1000, 1001, 1002, 1003, 1004, 0, 1, 2, 3, 4].For the CompositeQueue, the behavior is different. The order
will be maintained with a set of additions in a transaction, but the values
may be merged between the two transactions’ additions. We will compensate
for that here to get a reliable queue state.>>> if isinstance(q_1, queue.CompositeQueue):
... for i1, i2 in ((1002, 1003), (1004, 0), (1, 2), (3, 4)):
... q_1.put(Item(i1))
... q_2.put(Item(i2))
... transactionmanager_1.commit()
... transactionmanager_2.commit()
... connection_1.sync()
... connection_2.sync()
...Whichever kind of queue we have, we now have the following values.>>> list(q_1)
[1000, 1001, 1002, 1003, 1004, 0, 1, 2, 3, 4]
>>> list(q_2)
[1000, 1001, 1002, 1003, 1004, 0, 1, 2, 3, 4]If two users try to add the same item, then a conflict error is raised.>>> five = Item(5)
>>> q_1.put(five)
>>> q_2.put(five)
>>> transactionmanager_1.commit()
>>> from ZODB.POSException import ConflictError, InvalidObjectReference
>>> try:
... transactionmanager_2.commit() # doctest: +ELLIPSIS
... except (ConflictError, InvalidObjectReference):
... print("Conflict Error")
Conflict Error
>>> transactionmanager_2.abort()
>>> connection_1.sync()
>>> connection_2.sync()
>>> list(q_1)
[1000, 1001, 1002, 1003, 1004, 0, 1, 2, 3, 4, 5]
>>> list(q_2)
[1000, 1001, 1002, 1003, 1004, 0, 1, 2, 3, 4, 5]Users can also concurrently remove items from a queue…>>> q_1.pull()
1000
>>> q_1[0]
1001>>> q_2.pull(5)
0
>>> q_2[5]
1>>> q_2[0] # 1000 value still there in this connection
1000>>> q_1[4] # 0 value still there in this connection.
0>>> transactionmanager_1.commit()
>>> transactionmanager_2.commit()
>>> connection_1.sync()
>>> connection_2.sync()
>>> list(q_1)
[1001, 1002, 1003, 1004, 1, 2, 3, 4, 5]
>>> list(q_2)
[1001, 1002, 1003, 1004, 1, 2, 3, 4, 5]…as long as they don’t remove the same item.>>> q_1.pull()
1001
>>> q_2.pull()
1001
>>> transactionmanager_1.commit()
>>> transactionmanager_2.commit() # doctest: +ELLIPSIS
Traceback (most recent call last):
...
ConflictError: ...
>>> transactionmanager_2.abort()
>>> connection_1.sync()
>>> connection_2.sync()
>>> list(q_1)
[1002, 1003, 1004, 1, 2, 3, 4, 5]
>>> list(q_2)
[1002, 1003, 1004, 1, 2, 3, 4, 5]There’s actually a special case: the composite queue’s buckets will refuse to
merge if they started with a non-empty state, and one of the two new states
is empty. This is to prevent the loss of an addition to the queue. See
tests.py for an example.

Also importantly, users can concurrently remove and add items to a queue.

>>> q_1.pull()
1002
>>> q_1.pull()
1003
>>> q_1.pull()
1004
>>> q_2.put(Item(6))
>>> q_2.put(Item(7))
>>> transactionmanager_1.commit()
>>> transactionmanager_2.commit()
>>> connection_1.sync()
>>> connection_2.sync()
>>> list(q_1)
[1, 2, 3, 4, 5, 6, 7]
>>> list(q_2)
[1, 2, 3, 4, 5, 6, 7]

As a final example, we'll show the conflict resolution code under extreme
duress, with multiple simultaneous puts and pulls.

>>> res_1 = []
>>> for i in range(6, -1, -2):
... res_1.append(q_1.pull(i))
...
>>> res_1
[7, 5, 3, 1]
>>> res_2 = []
>>> for i in range(5, 0, -2):
... res_2.append(q_2.pull(i))
...
>>> res_2
[6, 4, 2]
>>> for i in range(8, 12):
... q_1.put(Item(i))
...
>>> for i in range(12, 16):
... q_2.put(Item(i))
...
>>> list(q_1)
[2, 4, 6, 8, 9, 10, 11]
>>> list(q_2)
[1, 3, 5, 7, 12, 13, 14, 15]
>>> transactionmanager_1.commit()
>>> transactionmanager_2.commit()
>>> connection_1.sync()
>>> connection_2.sync()

After these commits, if the queue is a Queue, the new values are
in the order of their commit. However, as discussed above, if the queue is
a CompositeQueue the behavior is different. While the order will be
maintained comparatively -- so if object A is ahead of object B in the queue
on commit then A will still be ahead of B after a merge of the conflicting
transactions -- values may be interspersed between the two transactions.
be [8, 9, 10, 11, 12, 13, 14, 15]. However, if it were a
CompositeQueue, the values might be the same, or might be any
combination in which [8, 9, 10, 11] and [12, 13, 14, 15], from the two
transactions, are still in order. One ordering might be
[8, 9, 12, 13, 10, 11, 14, 15], for instance.

>>> if isinstance(q_1, queue.Queue):
... res_1 = list(q_1)
... res_2 = list(q_2)
... elif isinstance(q_1, queue.CompositeQueue):
... firstsrc_1 = list(q_1)
... firstsrc_2 = list(q_2)
... secondsrc_1 = firstsrc_1[:]
... secondsrc_2 = firstsrc_2[:]
... for val in [12, 13, 14, 15]:
... firstsrc_1.remove(Item(val))
... firstsrc_2.remove(Item(val))
... for val in [8, 9, 10, 11]:
... secondsrc_1.remove(Item(val))
... secondsrc_2.remove(Item(val))
... res_1 = firstsrc_1 + secondsrc_1
... res_2 = firstsrc_2 + secondsrc_2
...
>>> res_1
[8, 9, 10, 11, 12, 13, 14, 15]
>>> res_2
[8, 9, 10, 11, 12, 13, 14, 15]

>>> db.close() # cleanup

PersistentReferenceProxy

As ZODB.ConflictResolution.PersistentReference doesn't get handled
properly in set due to the lack of a __hash__ method, we define a class
utilizing the __cmp__ method of the contained items [3].

Let's make some stub persistent reference objects. Also make some
objects that have the same oid to simulate different transaction states.

>>> from zc.queue.tests import StubPersistentReference
>>> pr1 = StubPersistentReference(1)
>>> pr2 = StubPersistentReference(2)
>>> pr3 = StubPersistentReference(3)
>>> pr_c1 = StubPersistentReference(1)
>>> pr_c2 = StubPersistentReference(2)
>>> pr_c3 = StubPersistentReference(3)

>>> pr1 == pr_c1
True
>>> pr2 == pr_c2
True
>>> pr3 == pr_c3
True
>>> id(pr1) == id(pr_c1)
False
>>> id(pr2) == id(pr_c2)
False
>>> id(pr3) == id(pr_c3)
False

>>> set1 = set((pr1, pr2))
>>> set1
set([SPR (1), SPR (2)])
>>> len(set1)
2
>>> set2 = set((pr_c1, pr_c3))
>>> set2
set([SPR (1), SPR (3)])
>>> len(set2)
2
>>> set_c1 = set((pr_c1, pr_c2))
>>> set_c1
set([SPR (1), SPR (2)])
>>> len(set_c1)
2

set doesn't handle persistent reference objects properly. All of the
following set operations produce wrong results.

Deduplication (notice that for items longer than length two we're only
checking the length and contents, not the ordering of the
representation, because that varies among different versions of Python):

>>> set((pr1, pr_c1))
set([SPR (1), SPR (1)])
>>> set((pr2, pr_c2))
set([SPR (2), SPR (2)])
>>> set4 = set((pr1, pr_c1, pr2))
>>> len(set4)
3
>>> pr1 in set4 and pr_c1 in set4 and pr2 in set4
True
>>> set4 = set((pr1, pr2, pr3, pr_c1, pr_c2, pr_c3))
>>> len(set4)
6

Minus operation:

>>> set3 = set1 - set2
>>> len(set3)
2
>>> set3
set([SPR (1), SPR (2)])

Contains:

>>> pr3 in set2
False

Intersection:

>>> set1 & set2
set([])

Compare:

>>> set1 == set_c1
False

So we made PersistentReferenceProxy wrapping PersistentReference
to work with sets.

>>> from zc.queue._queue import PersistentReferenceProxy
>>> prp1 = PersistentReferenceProxy(pr1)
>>> prp2 = PersistentReferenceProxy(pr2)
>>> prp3 = PersistentReferenceProxy(pr3)
>>> prp_c1 = PersistentReferenceProxy(pr_c1)
>>> prp_c2 = PersistentReferenceProxy(pr_c2)
>>> prp_c3 = PersistentReferenceProxy(pr_c3)
>>> prp1 == prp_c1
True
>>> prp2 == prp_c2
True
>>> prp3 == prp_c3
True
>>> id(prp1) == id(prp_c1)
False
>>> id(prp2) == id(prp_c2)
False
>>> id(prp3) == id(prp_c3)
False

>>> set1 = set((prp1, prp2))
>>> set1
set([SPR (1), SPR (2)])
>>> len(set1)
2
>>> set2 = set((prp_c1, prp_c3))
>>> set2
set([SPR (1), SPR (3)])
>>> len(set2)
2
>>> set_c1 = set((prp_c1, prp_c2))
>>> set_c1
set([SPR (1), SPR (2)])
>>> len(set_c1)
2

set handles persistent references properly now. All of the following set
operations produce correct results.

Deduplication:

>>> set4 = set((prp1, prp2, prp3, prp_c1, prp_c2, prp_c3))
>>> len(set4)
3
>>> set((prp1, prp_c1))
set([SPR (1)])
>>> set((prp2, prp_c2))
set([SPR (2)])
>>> set((prp1, prp_c1, prp2))
set([SPR (1), SPR (2)])

Minus operation:

>>> set3 = set1 - set2
>>> len(set3)
1
>>> set3
set([SPR (2)])
>>> set1 - set1
set([])
>>> set2 - set3
set([SPR (1), SPR (3)])
>>> set3 - set2
set([SPR (2)])

Contains:

>>> prp3 in set2
True
>>> prp1 in set2
True
>>> prp_c1 in set2
True
>>> prp2 not in set2
True

Intersection:

>>> set1 & set2
set([SPR (1)])
>>> set1 & set_c1
set([SPR (1), SPR (2)])
>>> set2 & set3
set([])

Compare:

>>> set1 == set_c1
True
>>> set1 == set2
False
>>> set1 == set4
False

[1] The queue's pull method is actually the interesting part in why
this constraint is used, and it becomes more so when you allow an
arbitrary pull. By asserting that you do not support having equal
items in the queue at once, you can simply say that when you remove
equal objects in the current state and the contemporary, conflicting
state, it’s a conflict error. Ideally you don’t enter another equal
item in that queue again, or else in fact this is still an
error-prone heuristic:

- start queue == [X];
- begin transactions A and B;
- B removes X and commits;
- transaction C adds X and Y and commits;
- transaction A removes X and tries to commit, and the conflict resolution
  code believes that it is ok to remove the new X from transaction C
  because it looks like it was just an addition of Y. The commit succeeds,
  and should not.

If you don't assert that you can use equality to examine conflicts,
then you have to come up with another heuristic. Given that the
conflict resolution code only gets three states to resolve, I don’t
know of a reliable one.Therefore, zc.queue has a policy of assuming that it can use
equality to distinguish items. It’s limiting, but the code can have
a better confidence of doing the right thing.Also, FWIW, this is policy I want: for my use cases, it would be
possible to put in two items in a queue that handle the same issue.
With the right equality code, this can be avoided with the policy
the queue has now.[2]Here are a few caveats about the state (as of this
writing) of ZODB conflict resolution in general.The biggest is that, if you store persistent.Persistent subclass
objects in a queue (or any other collection with conflict resolution
code, such as a BTree), the collection will get a placeholder object
(ZODB.ConflictResolution.PersistentReference), rather than the real
contained object. This object has __cmp__ method, but doesn’t have
__hash__ method, The same oid will get different placeholder in the
different states because of different identity in memory (e.g.id(obj))
for conflict resolution, which is wrong behavior in a queue.Another is that, in ZEO, conflict resolution is currently done on
the server, so the ZEO server must have a copy of the classes
(software) necessary to instantiate any non-persistent object in the
collection.A corollary to the above is that objects such as
zope.app.keyreference.persistent, which are not persistent
themselves but rely on a persistent object for their __cmp__, will
fail during conflict resolution. A reasonable solution in the case
of zope.app.keyreference.persistent code is to have the object store
the information it needs to do the comparison on itself, so the
absence of the persistent object during conflict resolution is
unimportant.[3]The reason why we definedPersistentReferenceProxyis that there would be a significant risk
of unintended consequenses for some ZODB users if we add __hash__
method to PersistentReference.CHANGES2.0.1 (unreleased)Nothing changed yet.2.0.0 (2017-05-11)Dropped support for Python 2.6 and 3.3.Added support for Python 3.4, 3.5, 3.6 and PyPy.Fix using complex slices (e.g., negative strides) inCompositeQueue. The cost is higher memory usage.2.0.0a1 (2013-03-01)Added support for Python 3.3.Replaced deprecatedzope.interface.implementsusage with equivalentzope.interface.implementerdecorator.Dropped support for Python 2.4 and 2.5.Fixed an issue where slicing a composite queue would fail due to a
programming error.
  [malthe]

1.3 (2012-01-11)

- Fixed a conflict resolution bug that didn't handle
  ZODB.ConflictResolution.PersistentReference correctly.
  Note that due to syntax we require Python 2.5 or higher now.

1.2.1 (2011-12-17)

- Fixed ImportError in setup.py.
  [maurits]

1.2 (2011-12-17)

- Fixed undefined ZODB.POSException.StorageTransactionError in tests.
- Let tests pass with ZODB 3.8 and ZODB 3.9.
- Added test extra to declare test dependency on zope.testing.
- Using Python's doctest module instead of deprecated zope.testing.doctest.
- Clean up the generation of reST docs.

1.1

- Fixed a conflict resolution bug in CompositeQueue
- Renamed PersistentQueue to Queue, CompositePersistentQueue to
CompositeQueue. The old names are nominally deprecated, although no
warnings are generated and there are no current plans to eliminate
them. The PersistentQueue class has more conservative conflict
resolution than it used to. (The Queue class has the same conflict
resolution as the PersistentQueue used to have.)

1.0.1

- Minor buildout changes
- Initial release to PyPI

1.0

- Initial release to zope.org
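The conflict-resolution behavior documented in the doctests above can be modeled, very roughly, by a three-state merge: anything removed in either new state stays removed, anything added in either is kept, and removing the same item in both states is a conflict. This sketch is hypothetical (the real logic lives in zc.queue._queue and additionally enforces the equality policy from note [1] and the composite-bucket special case):

```python
def resolve_queues(old, committed, new):
    """Rough model of zc.queue's three-state conflict resolution.

    old is the shared ancestor state; committed and new are the two
    conflicting states. Items removed in either state stay removed;
    items added in either state are kept. Removing the same item in
    both states is a conflict, as in the ConflictError examples above.
    """
    old_items = set(old)
    gone_in_committed = old_items - set(committed)
    gone_in_new = old_items - set(new)
    if gone_in_committed & gone_in_new:
        raise ValueError("conflict: both transactions removed the same item")
    merged = [item for item in committed if item not in gone_in_new]
    merged += [item for item in new if item not in old_items]
    return merged

# One side pulled 1 and added 8; the other pulled 2 and added 9.
print(resolve_queues([1, 2, 3], [2, 3, 8], [1, 3, 9]))  # [3, 8, 9]
```

Note the model places the first committer's additions ahead of the second's, matching the Queue ordering shown above; a CompositeQueue may interleave them.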
zc.recipe.cmmi

Recipe installing a download via configure/make/make install

The configure-make-make-install recipe automates installation of
configure-based source distributions into buildouts.

Options

url
  The URL of a source archive to download.

configure-command
  The name of the configure script. The option defaults to ./configure.

configure-options
  Basic configure options. Defaults to a --prefix option that points to
  the part directory.

extra_options
  A string of extra options to pass to configure in addition to the base
  options.

environment
  Optional environment variable settings of the form NAME=VALUE.
  Newlines are ignored. Spaces may be included in environment values
  as long as they can't be mistaken for environment settings. So:

    environment = FOO=bar
                  baz

  sets the environment variable FOO, but:

    environment = FOO=bar xxx=baz

  sets 2 environment values, FOO and xxx.

patch
  The name of an optional patch file to apply to the distribution.

patch_options
  Options to supply to the patch command (if a patch file is used).
  This defaults to -p0.

shared
  Share the build across buildouts.

autogen
  The name of a script to run to generate a configure script.

source-directory-contains
  The name of a file in the distribution's source directory. This is used
  by the recipe to determine if it has found the source directory. It
  defaults to "configure".

Note: This recipe is not expected to work in a Microsoft Windows
environment.

Release History

4.0 (2023-07-07)

- Drop support for Python 2.7, 3.5, 3.6.
- Add support for Python 3.9, 3.10, 3.11.

3.0.0 (2019-03-30)

- Drop support for Python 3.4.
- Add support for Python 3.7 and 3.8a2.
- Flake8 the code.

2.0.0 (2017-06-21)

- Add support for Python 3.4, 3.5, 3.6 and PyPy.
- Automated testing is enabled on Travis CI.

1.3.6 (2014-04-14)

- Fixed: Strings were incorrectly compared using "is not ''" rather than !=.
- Fixed: Spaces weren't allowed in the installation location.

1.3.5 (2011-08-06)

- Fixed: Spaces weren't allowed in environment variables.
- Fixed: Added missing option reference documentation.

1.3.4 (2011-01-18)

- Fixed a bug in location book-keeping that caused shared builds to be deleted
  from disk when a part didn't need them anymore. (#695977)
- Made tests pass with both zc.buildout 1.4 and 1.5, lifted the upper
  version bound on zc.buildout. (#695732)

1.3.3 (2010-11-10)

- Remove the temporary build directory when cmmi succeeds.
- Specify that the zc.buildout version be <1.5.0b1, as the recipe is
  currently not compatible with zc.buildout 1.5.

1.3.2 (2010-08-09)

- Remove the build directory for a shared build when the source cannot be
  downloaded.
- Declared a test dependency on zope.testing.

1.3.1 (2009-09-10)

- Declare dependency on zc.buildout 1.4 or later. This dependency was
  introduced in version 1.3.

1.3 (2009-09-03)

- Use zc.buildout's download API. As this allows MD5 checking, added the
  md5sum and patch-md5sum options.
- Added options for changing the name of the configure script and
  overriding the --prefix parameter.
- Moved the core "configure; make; make install" command sequence to a
  method that can be overridden in other recipes, to support packages
  whose installation process is slightly different.

1.2.1 (2009-08-12)

- Bug fix: keep track of reused shared builds.

1.2.0 (2009-05-18)

- Enabled using a shared directory for completed builds.

1.1.6 (2009-03-17)

- Moved 'zc' package from root of checkout into 'src', to prevent
  testrunner from finding eggs installed locally by buildout.
- Removed deprecations under Python 2.6.

1.1.5 (2008-11-07)

- Added to the README.txt file a link to the SVN repository, so that
  Setuptools can automatically find the development version when asked to
  install the "-dev" version of zc.recipe.cmmi.
- Applied fix for bug #261367, i.e. changed open() of the file being
  downloaded to binary, so that errors like the following no longer occur
  under Windows.

    uncompress = self.decompress.decompress(buf)
    error: Error -3 while decompressing: invalid distance too far back

1.1.4 (2008-06-25)

- Add support to autogen configure files.

1.1.3 (2008-06-03)

- Add support for updating the environment.

1.1.2 (2008-02-28)

- Check if the location folder exists before creating it.

After 1.1.0

- Added support for patches to be downloaded from a url rather than only
  using patches on the filesystem.

1.1.0

- Added support for:
  - download-cache: downloaded files are cached in the 'cmmi' subdirectory
    of the cache; cache keys are hashes of the url that the file was
    downloaded from; cache information is recorded in the cache.ini file
    within each directory
  - offline mode: cmmi will not go online if the package is not in the cache
  - variable location: build files other than in the parts directory if
    required
  - additional logging/output

1.0.2 (2007-06-03)

- Added support for patches.
- Tests fixed (buildout's output changed)

1.0.1 (2006-11-22)

- Added missing zip_safe flag.

1.0 (2006-11-22)

- Initial release.

Detailed Documentation

We have an archive with a demo foo tar ball and publish it by http in order
to see offline effects:>>> ls(distros)
- bar.tgz
- baz.tgz
- foo.tgz>>> distros_url = start_server(distros)Let’s update a sample buildout to installs it:>>> write('buildout.cfg',
... """
... [buildout]
... parts = foo
...
... [foo]
... recipe = zc.recipe.cmmi
... url = %sfoo.tgz
... """ % distros_url)We used the url option to specify the location of the archive.If we run the buildout, the configure script in the archive is run.
It creates a make file which is also run:>>> print(system('bin/buildout').strip())
Installing foo.
foo: Downloading http://localhost/foo.tgz
foo: Unpacking and configuring
configuring foo --prefix=/sample-buildout/parts/foo
echo building foo
building foo
echo installing foo
installing fooThe recipe also creates the parts directory:>>> import os.path
>>> os.path.isdir(join(sample_buildout, "parts", "foo"))
TrueIf we run the buildout again, the update method will be called, which
does nothing:>>> print(system('bin/buildout').strip())
Updating foo.You can supply extra configure options:>>> write('buildout.cfg',
... """
... [buildout]
... parts = foo
...
... [foo]
... recipe = zc.recipe.cmmi
... url = %sfoo.tgz
... extra_options = -a -b c
... """ % distros_url)>>> print(system('bin/buildout').strip())
Uninstalling foo.
Installing foo.
foo: Downloading http://localhost/foo.tgz
foo: Unpacking and configuring
configuring foo --prefix=/sample-buildout/parts/foo -a -b c
echo building foo
building foo
echo installing foo
installing fooThe recipe sets the location option, which can be read by other
recipes, to the location where the part is installed:>>> cat('.installed.cfg')
[buildout]
installed_develop_eggs =
parts = foo
<BLANKLINE>
[foo]
...
location = /sample_buildout/parts/foo
...It may be necessary to set some environment variables when running configure
or make. This can be done by adding an environment statement:>>> write('buildout.cfg',
... """
... [buildout]
... parts = foo
...
... [foo]
... recipe = zc.recipe.cmmi
... url = %sfoo.tgz
... environment =
... CFLAGS=-I/usr/lib/postgresql7.4/include
... """ % distros_url)>>> print(system('bin/buildout').strip())
Uninstalling foo.
Installing foo.
foo: Downloading http://localhost/foo.tgz
foo: Unpacking and configuring
foo: Updating environment: CFLAGS=-I/usr/lib/postgresql7.4/include
configuring foo --prefix=/sample_buildout/parts/foo
echo building foo
building foo
echo installing foo
installing fooSometimes it’s necessary to patch the sources before building a package.
You can specify the name of the patch to apply and (optional) patch options:First of all let’s write a patchfile:>>> import sys
>>> mkdir('patches')
>>> write('patches/config.patch',
... """--- configure
... +++ /dev/null
... @@ -1,13 +1,13 @@
... #!%s
... import sys
... -print("configuring foo " + ' '.join(sys.argv[1:]))
... +print("configuring foo patched " + ' '.join(sys.argv[1:]))
...
... Makefile_template = '''
... all:
... -\techo building foo
... +\techo building foo patched
...
... install:
... -\techo installing foo
... +\techo installing foo patched
... '''
...
... with open('Makefile', 'w') as f:
... _ = f.write(Makefile_template)
...
... """ % sys.executable)Now let’s create a buildout.cfg file. Note: If no patch option is beeing
passed, -p0 is appended by default.>>> write('buildout.cfg',
... """
... [buildout]
... parts = foo
...
... [foo]
... recipe = zc.recipe.cmmi
... url = %sfoo.tgz
... patch = ${buildout:directory}/patches/config.patch
... patch_options = -p0
... """ % distros_url)>>> print(system('bin/buildout').strip())
Uninstalling foo.
Installing foo.
foo: Downloading http://localhost/foo.tgz
foo: Unpacking and configuring
patching file configure
...
configuring foo patched --prefix=/sample_buildout/parts/foo
echo building foo patched
building foo patched
echo installing foo patched
installing foo patchedIt is possible to autogenerate the configure files:>>> write('buildout.cfg',
... """
... [buildout]
... parts = foo
...
... [foo]
... recipe = zc.recipe.cmmi
... url = %s/bar.tgz
... autogen = autogen.sh
... """ % distros_url)>>> print(system('bin/buildout').strip())
Uninstalling foo.
Installing foo.
foo: Downloading http://localhost//bar.tgz
foo: Unpacking and configuring
foo: auto generating configure files
configuring foo --prefix=/sample_buildout/parts/foo
echo building foo
building foo
echo installing foo
installing fooIt is also possible to support configure commands other than “./configure”:>>> write('buildout.cfg',
... """
... [buildout]
... parts = foo
...
... [foo]
... recipe = zc.recipe.cmmi
... url = %s/baz.tgz
... source-directory-contains = configure.py
... configure-command = ./configure.py
... configure-options =
... --bindir=bin
... """ % distros_url)>>> print(system('bin/buildout').strip())
Uninstalling foo.
Installing foo.
foo: Downloading http://localhost//baz.tgz
foo: Unpacking and configuring
configuring foo --bindir=bin
echo building foo
building foo
echo installing foo
installing fooWhen downloading a source archive or a patch, we can optionally make sure of
its authenticity by supplying an MD5 checksum that must be matched. If it
matches, we’ll not be bothered with the check by buildout’s output:>>> from hashlib import md5
>>> with open(join(distros, 'foo.tgz'), 'rb') as f:
... foo_md5sum = md5(f.read()).hexdigest()>>> write('buildout.cfg',
... """
... [buildout]
... parts = foo
...
... [foo]
... recipe = zc.recipe.cmmi
... url = %sfoo.tgz
... md5sum = %s
... """ % (distros_url, foo_md5sum))>>> print(system('bin/buildout').strip())
Uninstalling foo.
Installing foo.
foo: Downloading http://localhost/foo.tgz
foo: Unpacking and configuring
configuring foo --prefix=/sample_buildout/parts/foo
echo building foo
building foo
echo installing foo
installing fooBut if the archive doesn’t match the checksum, the recipe refuses to install:>>> write('buildout.cfg',
... """
... [buildout]
... parts = foo
...
... [foo]
... recipe = zc.recipe.cmmi
... url = %sbar.tgz
... md5sum = %s
... patch = ${buildout:directory}/patches/config.patch
... """ % (distros_url, foo_md5sum))>>> print(system('bin/buildout').strip())
Uninstalling foo.
Installing foo.
foo: Downloading http://localhost:20617/bar.tgz
While:
Installing foo.
Error: MD5 checksum mismatch downloading 'http://localhost/bar.tgz'Similarly, a checksum mismatch for the patch will cause the buildout run to be
aborted:>>> write('buildout.cfg',
... """
... [buildout]
... parts = foo
...
... [foo]
... recipe = zc.recipe.cmmi
... url = %sfoo.tgz
... patch = ${buildout:directory}/patches/config.patch
... patch-md5sum = %s
... """ % (distros_url, foo_md5sum))>>> print(system('bin/buildout').strip())
Installing foo.
foo: Downloading http://localhost:21669/foo.tgz
foo: Unpacking and configuring
While:
Installing foo.
Error: MD5 checksum mismatch for local resource at '/.../sample-buildout/patches/config.patch'.>>> write('buildout.cfg',
... """
... [buildout]
... parts = foo
...
... [foo]
... recipe = zc.recipe.cmmi
... url = %sfoo.tgz
... patch = ${buildout:directory}/patches/config.patch
... """ % (distros_url))If the build fails, the temporary directory where the tarball was unpacked
is logged to stdout, and left intact for debugging purposes.>>> write('patches/config.patch', "dgdgdfgdfg")>>> res = system('bin/buildout')
>>> print(res)
Installing foo.
foo: Downloading http://localhost/foo.tgz
foo: Unpacking and configuring
patch unexpectedly ends in middle of line
foo: cmmi failed: /.../...buildout-foo
patch: **** Only garbage was found in the patch input.
While:
Installing foo.
<BLANKLINE>
An internal error occurred due to a bug in either zc.buildout or in a
recipe being used:
...
subprocess.CalledProcessError: Command 'patch -p0 < ...' returned non-zero exit status ...
<BLANKLINE>>>> import re
>>> import os.path
>>> import shutil
>>> path = re.search('foo: cmmi failed: (.*)', res).group(1)
>>> os.path.exists(path)
True
>>> shutil.rmtree(path)After a successful build, such temporary directories are removed.>>> import glob
>>> import tempfile>>> old_tempdir = tempfile.gettempdir()
>>> tempdir = tempfile.tempdir = tempfile.mkdtemp(suffix='.buildout.build')
>>> dirs = glob.glob(os.path.join(tempdir, '*buildout-foo'))>>> write('buildout.cfg',
... """
... [buildout]
... parts = foo
...
... [foo]
... recipe = zc.recipe.cmmi
... url = %sfoo.tgz
... """ % (distros_url,))>>> print(system("bin/buildout"))
Installing foo.
foo: Downloading http://localhost:21445/foo.tgz
foo: Unpacking and configuring
configuring foo --prefix=/sample_buildout/parts/foo
echo building foo
building foo
echo installing foo
installing foo
<BLANKLINE>>>> new_dirs = glob.glob(os.path.join(tempdir, '*buildout-foo'))
>>> len(dirs) == len(new_dirs) == 0
True
>>> tempfile.tempdir = old_tempdir

Download Cache

The recipe supports use of a download cache in the same way
as zc.buildout. See downloadcache.txt for details.
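To tie the options together: a minimal sketch of a buildout part using the recipe with zc.buildout's download cache enabled (the URL and cache path are illustrative, not from the original text):

```ini
[buildout]
parts = foo
# zc.buildout-level setting; per the 1.1.0 changelog entry above, cmmi
# stores its downloads in the cache's 'cmmi' subdirectory.
download-cache = /home/me/.buildout/download-cache

[foo]
recipe = zc.recipe.cmmi
url = http://example.com/foo-1.0.tgz
```

With the cache populated, such a buildout can also run in offline mode without refetching the archive.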
zc.recipe.deployment

The zc.recipe.deployment recipe provides support for deploying
applications with multiple processes on Unix systems. (Perhaps support
for other systems will be added later.) It creates directories to hold
application instance configuration, log and run-time files. It also
sets or reads options that can be read by other programs to find out
where to place files:

cache-directory
  The name of the directory where application instances should write
  cached copies of replaceable data. This defaults to /var/cache/NAME,
  where NAME is the deployment name.

crontab-directory
  The name of the directory in which cron jobs should be placed.
  This defaults to /etc/cron.d.

etc-directory
  The name of the directory where configuration files should be
  placed. This defaults to /etc/NAME, where NAME is the deployment
  name.

etc-prefix
  The path of the directory where configuration should be stored for
  all applications. This defaults to /etc.

lib-directory
  The name of the directory where application instances should write
  valuable data. This defaults to /var/lib/NAME, where NAME is the
  deployment name.

log-directory
  The name of the directory where application instances should write
  their log files. This defaults to /var/log/NAME, where NAME is the
  deployment name.

logrotate-directory
  The name of the directory where logrotate configuration files
  should be placed, typically /etc/logrotate.d.

run-directory
  The name of the directory where application instances should put
  their run-time files such as pid files and inter-process
  communication socket files. This defaults to /var/run/NAME, where
  NAME is the deployment name.

rc-directory
  The name of the directory where run-control scripts should be
  installed. This defaults to /etc/init.d.

var-prefix
  The path of the directory where data should be stored for all
  applications. This defaults to /var.

Directories traditionally placed in the /var hierarchy are created in
such a way that the directories are owned by the user specified in the
user option and are writable by the user and the user's group.
Directories usually found in the /etc hierarchy are created owned by the
user specified by the etc-user setting (defaulting to 'root') with the
same permissions.

A system-wide configuration file, zc.recipe.deployment.cfg, located in
the etc-prefix directory, can be used to specify the var-prefix setting.
The file uses the Python-standard ConfigParser syntax:

  [deployment]
  var-prefix = /mnt/fatdisk

Note that the section name is not related to the name of the deployment
parts being built; this is a system-wide setting not specific to any
deployment. This is useful to identify very large partitions where
control over /var itself is difficult to achieve.

Changes

- Python 3 support.

1.3.0 (2015-11-11)

- Added an on-change option to the configuration recipe to run a
  command when a configuration file changes.

1.1.0 (2013-11-04)

- Do not touch an existing configuration file if the content hasn't
  changed.

1.0.0 (2013-04-24)

- Added a name option to the configuration recipe to allow
  explicit control of generated file paths.

0.10.2 (2013-04-10)

- Fix packaging bug.

0.10.1 (2013-04-10)

- Fix for 0.9 -> 0.10 .installed.cfg migration

0.10.0 (2013-03-28)

- Add etc-prefix and var-prefix to specify new locations of
  these entire trees. Final versions of these paths are exported.
- Previously undocumented & untested etc, log and run
  settings are deprecated. Warnings are logged if their values are
  used.
- Add cache-directory and lib-directory to the set of output
  directories.
- Add system-wide configuration, allowing locations of the "root"
  directories to be specified for an entire machine.
- Allow *-directory options (e.g. log-directory) to be
  overridden by configuration data.

0.9.0 (2011-11-21)

- Fixed test dependencies.
- Using Python's doctest module instead of deprecated zope.testing.doctest.
- Added a directory option for configuration to override the default etc
  directory.

0.8.0 (2010-05-18)

Features Added

- Added recipe for updating configuration files that may be shared by
  multiple applications.

0.7.1 (2010-03-05)

Bugs Fixed

- Fixed a serious bug that caused buildouts to fail when using new
  versions of the deployment recipe with older buildouts.
- Made uninstall more tolerant of directories it's about to delete
  already being deleted.

0.7.0 (2010-02-01)

Features Added

- You can now specify a user for crontab entries that is different than
  a deployment user.

0.6 (2008-02-01)

Features Added

- Added the ability to specify a name independent of the section name.
  Also, provide a name option for use by recipes.
- Important note to recipe authors: Recipes should use the deployment
  name option rather than the deployment name when computing names of
  generated files.

0.5 (Mar 23, 2007)

Features Added

- Added recipe for generating crontab files in /etc/cron.d.

0.4 (Mar 22, 2007)

Features Added

- Added setting for the logrotate configuration directories.

Bugs Fixed

- The documentation gave the wrong name for the crontab-directory option.

0.3 (Feb 14, 2007)

Features Added

- Added a configuration recipe for creating configuration files.

0.2.1 (Feb 13, 2007)

- Fixed bug in setup file.

0.2 (Feb 7, 2007)

Bugs Fixed

- Non-empty log and run directories were deleted in un- and
  re-install.

Detailed Documentation

Using the deployment recipe is pretty simple. Just specify a
deployment name, specified via the part name, and a deployment user.Let’s add a deployment to a sample buildout:>>> write('buildout.cfg',
... '''
... [buildout]
... parts = foo
...
... [foo]
... prefix = %s
... recipe = zc.recipe.deployment
... user = %s
... etc-user = %s
... ''' % (sample_buildout, user, user))>>> from six import print_
>>> print_(system(join('bin', 'buildout')), end='')
Installing foo.
zc.recipe.deployment:
Creating 'PREFIX/etc/foo',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/var/cache/foo',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/var/lib/foo',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/var/log/foo',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/var/run/foo',
mode 750, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/etc/cron.d',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/etc/init.d',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/etc/logrotate.d',
mode 755, user 'USER', group 'GROUP'Note that we are providing a prefix and an etc-user here. These options
default to ‘/’ and ‘root’, respectively.Now we can see that directories named foo in PREFIX/etc, PREFIX/var/log and
PREFIX/var/run have been created:>>> import os
>>> print_(ls(os.path.join(sample_buildout, 'etc/foo')))
drwxr-xr-x USER GROUP PREFIX/etc/foo>>> print_(ls(os.path.join(sample_buildout, 'var/cache/foo')))
drwxr-xr-x USER GROUP PREFIX/var/cache/foo>>> print_(ls(os.path.join(sample_buildout, 'var/lib/foo')))
drwxr-xr-x USER GROUP PREFIX/var/lib/foo>>> print_(ls(os.path.join(sample_buildout, 'var/log/foo')))
drwxr-xr-x USER GROUP PREFIX/var/log/foo>>> print_(ls(os.path.join(sample_buildout, 'var/run/foo')))
drwxr-x--- USER GROUP PREFIX/var/run/fooBy looking at .installed.cfg, we can see the options available for use
by other recipes:>>> cat('.installed.cfg') # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
[buildout]
...
[foo]
__buildout_installed__ =
...
cache-directory = PREFIX/var/cache/foo
crontab-directory = PREFIX/etc/cron.d
etc-directory = PREFIX/etc/foo
etc-prefix = PREFIX/etc
etc-user = USER
lib-directory = PREFIX/var/lib/foo
log-directory = PREFIX/var/log/foo
logrotate-directory = PREFIX/etc/logrotate.d
name = foo
prefix = PREFIX
rc-directory = PREFIX/etc/init.d
recipe = zc.recipe.deployment
run-directory = PREFIX/var/run/foo
user = USER
var-prefix = PREFIX/varIf we uninstall, then the directories are removed.>>> print_(system(join('bin', 'buildout')+' buildout:parts='), end='')
Uninstalling foo.
Running uninstall recipe.
zc.recipe.deployment: Removing 'PREFIX/etc/foo'
zc.recipe.deployment: Removing 'PREFIX/etc/cron.d'.
zc.recipe.deployment: Removing 'PREFIX/etc/init.d'.
zc.recipe.deployment: Removing 'PREFIX/etc/logrotate.d'.
zc.recipe.deployment: Removing 'PREFIX/var/cache/foo'.
zc.recipe.deployment: Removing 'PREFIX/var/lib/foo'.
zc.recipe.deployment: Removing 'PREFIX/var/log/foo'.
zc.recipe.deployment: Removing 'PREFIX/var/run/foo'.>>> import os
>>> os.path.exists(os.path.join(sample_buildout, 'etc/foo'))
False
>>> os.path.exists(os.path.join(sample_buildout, 'var/cache/foo'))
False
>>> os.path.exists(os.path.join(sample_buildout, 'var/lib/foo'))
False
>>> os.path.exists(os.path.join(sample_buildout, 'var/log/foo'))
False
>>> os.path.exists(os.path.join(sample_buildout, 'var/run/foo'))
FalseThe cache, lib, log and run directories are only removed if they are empty.
To see that, we’ll put a file in each of the directories created:>>> print_(system(join('bin', 'buildout')), end='')
... # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
Installing foo.
zc.recipe.deployment:
Creating 'PREFIX/etc/foo',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/var/cache/foo',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/var/lib/foo',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/var/log/foo',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/var/run/foo',
mode 750, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/etc/cron.d',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/etc/init.d',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/etc/logrotate.d',
mode 755, user 'USER', group 'GROUP'>>> write(os.path.join(sample_buildout, 'etc/foo/x'), '')
>>> write(os.path.join(sample_buildout, 'var/cache/foo/x'), '')
>>> write(os.path.join(sample_buildout, 'var/lib/foo/x'), '')
>>> write(os.path.join(sample_buildout, 'var/log/foo/x'), '')
>>> write(os.path.join(sample_buildout, 'var/run/foo/x'), '')And then uninstall:>>> print_(system(join('bin', 'buildout')+' buildout:parts='), end='')
Uninstalling foo.
Running uninstall recipe.
zc.recipe.deployment: Removing 'PREFIX/etc/foo'
zc.recipe.deployment: Removing 'PREFIX/etc/cron.d'.
zc.recipe.deployment: Removing 'PREFIX/etc/init.d'.
zc.recipe.deployment: Removing 'PREFIX/etc/logrotate.d'.
zc.recipe.deployment: Can't remove non-empty directory 'PREFIX/var/cache/foo'.
zc.recipe.deployment: Can't remove non-empty directory 'PREFIX/var/lib/foo'.
zc.recipe.deployment: Can't remove non-empty directory 'PREFIX/var/log/foo'.
zc.recipe.deployment: Can't remove non-empty directory 'PREFIX/var/run/foo'.>>> os.path.exists(os.path.join(sample_buildout, 'etc/foo'))
False>>> print_(ls(os.path.join(sample_buildout, 'var/cache/foo')))
drwxr-xr-x USER GROUP PREFIX/var/cache/foo>>> print_(ls(os.path.join(sample_buildout, 'var/lib/foo')))
drwxr-xr-x USER GROUP PREFIX/var/lib/foo>>> print_(ls(os.path.join(sample_buildout, 'var/log/foo')))
drwxr-xr-x USER GROUP PREFIX/var/log/foo>>> print_(ls(os.path.join(sample_buildout, 'var/run/foo')))
drwxr-x--- USER GROUP PREFIX/var/run/fooHere we see that the var and run directories are kept. The etc
directory is discarded because only buildout recipes should write to
it and all of its data are expendable.If we reinstall, remove the files, and uninstall, then the directories
are removed:>>> print_(system(join('bin', 'buildout')), end='')
Installing foo.
zc.recipe.deployment:
Creating 'PREFIX/etc/foo',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Updating 'PREFIX/var/cache/foo',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Updating 'PREFIX/var/lib/foo',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Updating 'PREFIX/var/log/foo',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Updating 'PREFIX/var/run/foo',
mode 750, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/etc/cron.d',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/etc/init.d',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/etc/logrotate.d',
mode 755, user 'USER', group 'GROUP'>>> os.remove(os.path.join(sample_buildout, 'var/cache/foo/x'))
>>> os.remove(os.path.join(sample_buildout, 'var/lib/foo/x'))
>>> os.remove(os.path.join(sample_buildout, 'var/log/foo/x'))
>>> os.remove(os.path.join(sample_buildout, 'var/run/foo/x'))>>> print_(system(join('bin', 'buildout')+' buildout:parts='), end='')
Uninstalling foo.
Running uninstall recipe.
zc.recipe.deployment: Removing 'PREFIX/etc/foo'
zc.recipe.deployment: Removing 'PREFIX/etc/cron.d'.
zc.recipe.deployment: Removing 'PREFIX/etc/init.d'.
zc.recipe.deployment: Removing 'PREFIX/etc/logrotate.d'.
zc.recipe.deployment: Removing 'PREFIX/var/cache/foo'.
zc.recipe.deployment: Removing 'PREFIX/var/lib/foo'.
zc.recipe.deployment: Removing 'PREFIX/var/log/foo'.
zc.recipe.deployment: Removing 'PREFIX/var/run/foo'.>>> os.path.exists('' + os.path.join(sample_buildout, 'PREFIX/etc/foo'))
False
>>> os.path.exists('' + os.path.join(sample_buildout, 'PREFIX/var/cache/foo'))
False
>>> os.path.exists('' + os.path.join(sample_buildout, 'PREFIX/var/lib/foo'))
False
>>> os.path.exists('' + os.path.join(sample_buildout, 'PREFIX/var/log/foo'))
False
>>> os.path.exists('' + os.path.join(sample_buildout, 'PREFIX/var/run/foo'))
FalsePrior to zc.recipe.deployment 0.10.0, some directories (e.g., cache-directory,
lib-directory) were not managed by zc.recipe.deployment. So on uninstall, we
can expect any nonexistent directory keys to be silently ignored.>>> _ = system(join('bin', 'buildout')), # doctest: +NORMALIZE_WHITESPACE
>>> new_installed_contents = ""
>>> with open(
... os.path.join(sample_buildout, ".installed.cfg")) as fi:
... for line in fi.readlines():
... if (not line.startswith("cache-directory = ") and
... not line.startswith("lib-directory = ")):
... new_installed_contents += line
>>> with open(
... os.path.join(sample_buildout, ".installed.cfg"), 'w') as fi:
... _ = fi.write(new_installed_contents)
>>> print_(system(join('bin', 'buildout')+' buildout:parts='), end='')
Uninstalling foo.
Running uninstall recipe.
zc.recipe.deployment: Removing '/tmp/tmpcokpi_buildoutSetUp/_TEST_/sample-buildout/etc/foo'
zc.recipe.deployment: Removing '/tmp/tmpcokpi_buildoutSetUp/_TEST_/sample-buildout/etc/cron.d'.
zc.recipe.deployment: Removing '/tmp/tmpcokpi_buildoutSetUp/_TEST_/sample-buildout/etc/init.d'.
zc.recipe.deployment: Removing '/tmp/tmpcokpi_buildoutSetUp/_TEST_/sample-buildout/etc/logrotate.d'.
zc.recipe.deployment: Removing '/tmp/tmpcokpi_buildoutSetUp/_TEST_/sample-buildout/var/log/foo'.
zc.recipe.deployment: Removing '/tmp/tmpcokpi_buildoutSetUp/_TEST_/sample-buildout/var/run/foo'.We’ll finish the cleanup that our modified .installed.cfg missed.>>> os.removedirs(os.path.join(sample_buildout, 'var/cache/foo'))
>>> os.removedirs(os.path.join(sample_buildout, 'var/lib/foo'))Deployment NameThe deployment name is used for naming generated files and directories.
The deployment name defaults to the section name, but the deployment
name can be specified explicitly:>>> write('buildout.cfg',
... '''
... [buildout]
... parts = foo
...
... [foo]
... recipe = zc.recipe.deployment
... prefix = %s
... name = bar
... user = %s
... etc-user = %s
... ''' % (sample_buildout, user, user))>>> print_(system(join('bin', 'buildout')), end='')
Installing foo.
zc.recipe.deployment:
Creating 'PREFIX/etc/bar',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/var/cache/bar',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/var/lib/bar',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/var/log/bar',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/var/run/bar',
mode 750, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/etc/cron.d',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/etc/init.d',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/etc/logrotate.d',
mode 755, user 'USER', group 'GROUP'>>> print_(ls(os.path.join(sample_buildout, 'etc/bar')))
drwxr-xr-x USER GROUP PREFIX/etc/bar>>> print_(ls(os.path.join(sample_buildout, 'var/cache/bar')))
drwxr-xr-x USER GROUP PREFIX/var/cache/bar>>> print_(ls(os.path.join(sample_buildout, 'var/lib/bar')))
drwxr-xr-x USER GROUP PREFIX/var/lib/bar>>> print_(ls(os.path.join(sample_buildout, 'var/log/bar')))
drwxr-xr-x USER GROUP PREFIX/var/log/bar>>> print_(ls(os.path.join(sample_buildout, 'var/run/bar')))
drwxr-x--- USER GROUP PREFIX/var/run/bar>>> cat('.installed.cfg') # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
[buildout]
installed_develop_eggs =
parts = foo
<BLANKLINE>
[foo]
__buildout_installed__ =
...
cache-directory = PREFIX/var/cache/bar
crontab-directory = PREFIX/etc/cron.d
etc-directory = PREFIX/etc/bar
etc-prefix = PREFIX/etc
etc-user = USER
lib-directory = PREFIX/var/lib/bar
log-directory = PREFIX/var/log/bar
logrotate-directory = PREFIX/etc/logrotate.d
name = bar
prefix = PREFIX
rc-directory = PREFIX/etc/init.d
recipe = zc.recipe.deployment
run-directory = PREFIX/var/run/bar
user = USER
var-prefix = PREFIX/varNote (here and earlier) that the options include the name option,
which defaults to the part name. Other parts that use the deployment
name should use the name option rather than the part name.Configuration filesNormally, configuration files are created by specialized recipes.
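Such a specialized recipe typically looks up the deployment part's etc-directory, writes its file there, and returns the created paths so buildout can remove them on uninstall. A minimal sketch of that shape (the class and its option handling are illustrative, not the actual zc.recipe.deployment:configuration implementation):

```python
import os
import tempfile

class ConfigRecipe:
    """Illustrative sketch, not the real zc.recipe.deployment:configuration."""
    def __init__(self, buildout, name, options):
        self.options = options
        deployment = options.get('deployment')
        if deployment:
            # Reuse the etc-directory computed by the deployment part.
            directory = buildout[deployment]['etc-directory']
        else:
            directory = buildout['buildout']['parts-directory']
        options['location'] = os.path.join(directory, options.get('name', name))

    def install(self):
        # Write the configured text; return created paths so buildout
        # can remove them when the part is uninstalled.
        with open(self.options['location'], 'w') as f:
            f.write(self.options['text'])
        return [self.options['location']]

# Stand-in for buildout's section mapping, using plain dicts.
etc = tempfile.mkdtemp()
buildout = {'foo': {'etc-directory': etc}}
options = {'deployment': 'foo', 'text': 'xxx\nyyy\nzzz\n'}
paths = ConfigRecipe(buildout, 'x.cfg', options).install()
```

The real recipes shown below add more (ownership, permissions, uninstall hooks), but the path computation and returned-paths contract follow this pattern.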
Sometimes, it’s useful to specify configuration files in a buildout
configuration file. The zc.recipe.deployment:configuration recipe can be
used to do that.Let’s add a configuration file to our buildout:>>> write('buildout.cfg',
... '''
... [buildout]
... parts = foo x.cfg
...
... [foo]
... recipe = zc.recipe.deployment
... prefix = %s
... user = %s
... etc-user = %s
...
... [x.cfg]
... recipe = zc.recipe.deployment:configuration
... text = xxx
... yyy
... zzz
... ''' % (sample_buildout, user, user))>>> print_(system(join('bin', 'buildout')), end='')
Uninstalling foo.
Running uninstall recipe.
zc.recipe.deployment: Removing 'PREFIX/etc/bar'
zc.recipe.deployment: Removing 'PREFIX/etc/cron.d'.
zc.recipe.deployment: Removing 'PREFIX/etc/init.d'.
zc.recipe.deployment: Removing 'PREFIX/etc/logrotate.d'.
zc.recipe.deployment: Removing 'PREFIX/var/cache/bar'.
zc.recipe.deployment: Removing 'PREFIX/var/lib/bar'.
zc.recipe.deployment: Removing 'PREFIX/var/log/bar'.
zc.recipe.deployment: Removing 'PREFIX/var/run/bar'.
Installing foo.
zc.recipe.deployment:
Creating 'PREFIX/etc/foo',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/var/cache/foo',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/var/lib/foo',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/var/log/foo',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/var/run/foo',
mode 750, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/etc/cron.d',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/etc/init.d',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/etc/logrotate.d',
mode 755, user 'USER', group 'GROUP'
Installing x.cfg.By default, the configuration is installed as a part:>>> cat('parts', 'x.cfg')
xxx
yyy
zzzIf a deployment is specified, then the file is placed in the
deployment etc directory:>>> write('buildout.cfg',
... '''
... [buildout]
... parts = foo x.cfg
...
... [foo]
... recipe = zc.recipe.deployment
... prefix = %s
... user = %s
... etc-user = %s
...
... [x.cfg]
... recipe = zc.recipe.deployment:configuration
... text = xxx
... yyy
... zzz
... deployment = foo
... ''' % (sample_buildout, user, user))>>> print_(system(join('bin', 'buildout')), end='')
Uninstalling x.cfg.
Updating foo.
Installing x.cfg.
zc.recipe.deployment:
Updating 'PREFIX/etc/foo',
mode 755, user 'USER', group 'GROUP'>>> os.path.exists(join('parts', 'x.cfg'))
False>>> cat(os.path.join(sample_buildout, 'etc/foo/x.cfg'))
xxx
yyy
zzzIf a directory is specified, then the file is placed in the directory.>>> write('buildout.cfg',
... '''
... [buildout]
... parts = foo x.cfg
...
... [foo]
... recipe = zc.recipe.deployment
... prefix = %s
... user = %s
... etc-user = %s
...
... [x.cfg]
... recipe = zc.recipe.deployment:configuration
... text = xxx
... yyy
... zzz
... directory = etc/foobar
... deployment = foo
... ''' % (sample_buildout, user, user))>>> print_(system(join('bin', 'buildout')), end='')
Uninstalling x.cfg.
Updating foo.
Installing x.cfg.
zc.recipe.deployment:
Creating 'PREFIX/etc/foobar',
mode 755, user 'USER', group 'GROUP'>>> os.path.exists(join('parts', 'x.cfg'))
False
>>> os.path.exists(join(sample_buildout, 'etc/foo/x.cfg'))
False>>> cat(os.path.join(sample_buildout, 'etc/foobar/x.cfg'))
xxx
yyy
zzzA directory option works only with a deployment option.>>> write('buildout.cfg',
... '''
... [buildout]
... parts = foo x.cfg
...
... [foo]
... recipe = zc.recipe.deployment
... prefix = %s
... user = %s
... etc-user = %s
...
... [x.cfg]
... recipe = zc.recipe.deployment:configuration
... text = xxx
... yyy
... zzz
... directory = etc/foobar
... ''' % (sample_buildout, user, user))>>> print_(system(join('bin', 'buildout')), end='')
Uninstalling x.cfg.
Updating foo.
Installing x.cfg.>>> os.path.exists(join('parts', 'x.cfg'))
True
>>> os.path.exists(join(sample_buildout, 'etc/foobar/x.cfg'))
False>>> cat('parts', 'x.cfg')
xxx
yyy
zzzWe can read data from a file rather than specifying in the
configuration:>>> write('x.in', '1\n2\n3\n')>>> write('buildout.cfg',
... '''
... [buildout]
... parts = foo x.cfg
...
... [foo]
... recipe = zc.recipe.deployment
... prefix = %s
... user = %s
... etc-user = %s
...
... [x.cfg]
... recipe = zc.recipe.deployment:configuration
... file = x.in
... deployment = foo
... ''' % (sample_buildout, user, user))>>> print_(system(join('bin', 'buildout')), end='')
Uninstalling x.cfg.
Updating foo.
Installing x.cfg.
zc.recipe.deployment:
Updating 'PREFIX/etc/foo',
mode 755, user 'USER', group 'GROUP'>>> cat(os.path.join(sample_buildout, 'etc/foo/x.cfg'))
1
2
3The recipe sets a location option that can be used by other recipes:>>> cat('.installed.cfg') # doctest: +ELLIPSIS
[buildout]
...
[x.cfg]
...
location = PREFIX/etc/foo/x.cfg
...By default, the part name is used as the file name. You can specify a
name explicitly using the name option:>>> write('buildout.cfg',
... '''
... [buildout]
... parts = foo x.cfg
...
... [foo]
... recipe = zc.recipe.deployment
... prefix = %s
... user = %s
... etc-user = %s
...
... [x.cfg]
... recipe = zc.recipe.deployment:configuration
... name = y.cfg
... text = this is y
... deployment = foo
... ''' % (sample_buildout, user, user))>>> print_(system(join('bin', 'buildout')), end='')
Uninstalling x.cfg.
Updating foo.
Installing x.cfg.
zc.recipe.deployment:
Updating 'PREFIX/etc/foo',
mode 755, user 'USER', group 'GROUP'>>> cat(os.path.join(sample_buildout, 'etc/foo/y.cfg'))
this is yIf name is given, only the file so named is created:>>> os.path.exists(os.path.join(sample_buildout, 'etc', 'foo', 'x.cfg'))
FalseThe name can be a path, or even absolute:>>> write('buildout.cfg',
... '''
... [buildout]
... parts = foo x.cfg
...
... [foo]
... recipe = zc.recipe.deployment
... prefix = %s
... user = %s
... etc-user = %s
...
... [x.cfg]
... recipe = zc.recipe.deployment:configuration
... name = ${buildout:directory}/y.cfg
... text = this is y also
... deployment = foo
... ''' % (sample_buildout, user, user))>>> print_(system(join('bin', 'buildout')), end='')
Uninstalling x.cfg.
Updating foo.
Installing x.cfg.
zc.recipe.deployment:
Updating 'PREFIX/etc/foo',
mode 755, user 'USER', group 'GROUP'>>> cat('y.cfg')
this is y alsoIf the content of the configuration file is unchanged between builds,
and the path hasn’t been changed, the file isn’t actually written in
subsequent builds. This is helpful if processes that use the file watch
for changes.>>> mod_time = os.stat('y.cfg').st_mtime>>> print_(system(join('bin', 'buildout')), end='')
Updating foo.
Updating x.cfg.
zc.recipe.deployment:
Updating 'PREFIX/etc/foo',
mode 755, user 'USER', group 'GROUP'>>> os.stat('y.cfg').st_mtime == mod_time
TrueRunning a command when a configuration file changesOften, when working with configuration files, you’ll need to restart
processes when configuration files change. You can specify anon-changeoption that takes a command to run whenever a
configuration file changes:>>> write('buildout.cfg',
... '''
... [buildout]
... parts = foo x.cfg
...
... [foo]
... recipe = zc.recipe.deployment
... prefix = %s
... user = %s
... etc-user = %s
...
... [x.cfg]
... recipe = zc.recipe.deployment:configuration
... name = ${buildout:directory}/y.cfg
... text = this is y
... deployment = foo
... on-change = echo /etc/init.d/x start
... ''' % (sample_buildout, user, user))>>> print_(system(join('bin', 'buildout')), end='')
Uninstalling x.cfg.
Updating foo.
Installing x.cfg.
zc.recipe.deployment:
Updating 'PREFIX/etc/foo',
mode 755, user 'USER', group 'GROUP'
/etc/init.d/x startCron supportThe crontab recipe provides support for creating crontab files. It
uses a times option to specify times to run the command and a command
option containing the command.>>> write('buildout.cfg',
... '''
... [buildout]
... parts = foo cron
...
... [foo]
... recipe = zc.recipe.deployment
... prefix = %s
... user = %s
... etc-user = %s
...
... [cron]
... recipe = zc.recipe.deployment:crontab
... times = 30 23 * * *
... command = echo hello world!
... deployment = foo
... ''' % (sample_buildout, user, user))>>> print_(system(join('bin', 'buildout')), end='')
Updating foo.
Installing cron.This example creates PREFIX/etc/cron.d/foo-cron>>> open(os.path.join(sample_buildout, 'etc/cron.d/foo-cron')).read()
'30 23 * * *\tUSER\techo hello world!\n'The crontab recipe gets its user from the buildout’s deployment by default,
but it doesn’t have to.>>> write('buildout.cfg',
... '''
... [buildout]
... parts = foo cron
...
... [foo]
... recipe = zc.recipe.deployment
... name = bar
... prefix = %s
... user = %s
... etc-user = %s
...
... [cron]
... recipe = zc.recipe.deployment:crontab
... times = 30 23 * * *
... user = bob
... command = echo hello world!
... deployment = foo
... ''' % (sample_buildout, user, user))>>> print_(system(join('bin', 'buildout')), end='')
Uninstalling cron.
Updating foo.
Installing cron.>>> open('etc/cron.d/bar-cron').read()
'30 23 * * *\tbob\techo hello world!\n'SharedConfigThis recipe can be used to update configuration files that are shared by
multiple applications. The absolute path of the file must be specified.
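The recipe keeps track of what it added by bracketing its lines between begin/end comment markers, so a later uninstall can strip exactly that region. A rough sketch of the marker technique (these helper functions are illustrative, not the recipe's actual API):

```python
def add_section(data, key, text):
    # Ensure the existing data ends with a newline, then append the
    # marked section, preceded by a blank line for readability.
    if data and not data.endswith('\n'):
        data += '\n'
    return (data + '\n#[%s DO NOT MODIFY LINES FROM HERE#\n' % key
            + text + '\n#TILL HERE %s]#\n' % key)

def remove_section(data, key):
    # Strip the marked region, markers included; leave other content alone.
    begin = '\n#[%s DO NOT MODIFY LINES FROM HERE#\n' % key
    end = '#TILL HERE %s]#\n' % key
    start = data.find(begin)
    stop = data.find(end)
    if start == -1 or stop == -1:
        return data
    return data[:start] + data[stop + len(end):]

shared = add_section('one', 'foo_y.cfg', 'xxx')
```

The marker strings here mirror the ones shown in the example output below.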
Also, the configuration files must accept comments that start with “#”.Like the configuration recipe, the content to add in the configuration file can
be provided using the “text” or the “file” option.First let’s create a file that will be used as the shared configuration file.>>> _ = open('y.cfg', 'w').write(
... '''Some
... existing
... configuration
... ''')We now create our buildout configuration and use the “sharedconfig” recipe and
run buildout.>>> write('buildout.cfg',
... '''
... [buildout]
... parts = foo y.cfg
...
... [foo]
... recipe = zc.recipe.deployment
... prefix = %s
... user = %s
... etc-user = %s
...
... [y.cfg]
... recipe = zc.recipe.deployment:sharedconfig
... path = y.cfg
... deployment = foo
... text = xxx
... yyy
... zzz
... ''' % (sample_buildout, user, user))>>> print_(system(join('bin', 'buildout')), end='')
Installing foo.
zc.recipe.deployment:
Creating 'PREFIX/etc/foo',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/var/cache/foo',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/var/lib/foo',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/var/log/foo',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/var/run/foo',
mode 750, user 'USER', group 'GROUP'
zc.recipe.deployment:
Updating 'PREFIX/etc/cron.d',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Updating 'PREFIX/etc/init.d',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Updating 'PREFIX/etc/logrotate.d',
mode 755, user 'USER', group 'GROUP'
Installing y.cfg.>>> print_(open('y.cfg', 'r').read())
Some
existing
configuration
<BLANKLINE>
#[foo_y.cfg DO NOT MODIFY LINES FROM HERE#
xxx
yyy
zzz
#TILL HERE foo_y.cfg]#
<BLANKLINE>Running buildout again without modifying the configuration leaves the file the
same.>>> print_(system(join('bin', 'buildout')), end='')
Updating foo.
Updating y.cfg.>>> print_(open('y.cfg', 'r').read())
Some
existing
configuration
<BLANKLINE>
#[foo_y.cfg DO NOT MODIFY LINES FROM HERE#
xxx
yyy
zzz
#TILL HERE foo_y.cfg]#
<BLANKLINE>If we add some more lines to the file>>> _ = open('y.cfg', 'a').write(
... '''Some
... additional
... configuration
... ''')and run buildout again, but this time after modifying the configuration for
“y.cfg”, the sections will be moved to the end of the file.>>> write('buildout.cfg',
... '''
... [buildout]
... parts = foo y.cfg
...
... [foo]
... recipe = zc.recipe.deployment
... prefix = %s
... user = %s
... etc-user = %s
...
... [y.cfg]
... recipe = zc.recipe.deployment:sharedconfig
... path = y.cfg
... deployment = foo
... text = 111
... 222
... 333
... ''' % (sample_buildout, user, user))>>> print_(system(join('bin', 'buildout')), end='')
Uninstalling y.cfg.
Running uninstall recipe.
Updating foo.
Installing y.cfg.>>> print_(open('y.cfg', 'r').read())
Some
existing
configuration
Some
additional
configuration
<BLANKLINE>
#[foo_y.cfg DO NOT MODIFY LINES FROM HERE#
111
222
333
#TILL HERE foo_y.cfg]#
<BLANKLINE>The text to append to the shared configuration file can also be provided via a
file.>>> write('x.cfg', '''
... [foo]
... a = 1
... b = 2
...
... [log]
... c = 1
... ''')>>> write('buildout.cfg',
... '''
... [buildout]
... parts = foo y.cfg
...
... [foo]
... recipe = zc.recipe.deployment
... prefix = %s
... user = %s
... etc-user = %s
...
... [y.cfg]
... recipe = zc.recipe.deployment:sharedconfig
... path = %s/etc/z.cfg
... deployment = foo
... file = x.cfg
... ''' % (sample_buildout, user, user, sample_buildout))
>>> print_(system(join('bin', 'buildout')), end='')
While:
Installing.
Getting section y.cfg.
Initializing section y.cfg.
Error: Path 'PREFIX/etc/z.cfg' does not existOops. The path of the configuration file must exist. Let’s create one.>>> write(join(sample_buildout, 'etc', 'z.cfg'), '')
>>> print_(system(join('bin', 'buildout')), end='')
Uninstalling y.cfg.
Running uninstall recipe.
Updating foo.
Installing y.cfg.>>> print_(open(join(sample_buildout, 'etc', 'z.cfg'), 'r').read())
<BLANKLINE>
#[foo_y.cfg DO NOT MODIFY LINES FROM HERE#
<BLANKLINE>
[foo]
a = 1
b = 2
<BLANKLINE>
[log]
c = 1
<BLANKLINE>
#TILL HERE foo_y.cfg]#
<BLANKLINE>While uninstalling, only the lines that the recipe installed are removed.>>> print_(system(join('bin', 'buildout')+' buildout:parts='), end='')
Uninstalling y.cfg.
Running uninstall recipe.
Uninstalling foo.
Running uninstall recipe.
zc.recipe.deployment: Removing 'PREFIX/etc/foo'
zc.recipe.deployment: Removing 'PREFIX/etc/cron.d'.
zc.recipe.deployment: Removing 'PREFIX/etc/init.d'.
zc.recipe.deployment: Removing 'PREFIX/etc/logrotate.d'.
zc.recipe.deployment: Removing 'PREFIX/var/cache/foo'.
zc.recipe.deployment: Removing 'PREFIX/var/lib/foo'.
zc.recipe.deployment: Removing 'PREFIX/var/log/foo'.
zc.recipe.deployment: Removing 'PREFIX/var/run/foo'.But the files are not deleted.>>> os.path.exists('y.cfg')
True>>> print_(open('y.cfg', 'r').read())
Some
existing
configuration
Some
additional
configuration
<BLANKLINE>>>> os.path.exists(join(sample_buildout, 'etc', 'z.cfg'))
True>>> print_(open(join(sample_buildout, 'etc', 'z.cfg'), 'r').read())
<BLANKLINE>
Edge cases
The SharedConfig recipe checks to see if the current data in the file
ends with a new line. If it doesn’t, the recipe adds one. This is in
addition to the blank line the recipe adds before the section to enhance
readability.>>> _ = open('anotherconfig.cfg', 'w').write('one')
>>> write('buildout.cfg',
... '''
... [buildout]
... parts = foo y.cfg
...
... [foo]
... recipe = zc.recipe.deployment
... prefix = %s
... user = %s
... etc-user = %s
...
... [y.cfg]
... recipe = zc.recipe.deployment:sharedconfig
... path = anotherconfig.cfg
... deployment = foo
... text = I predict that there will be a blank line above this.
... ''' % (sample_buildout, user, user))
>>> print_(system(join('bin', 'buildout')), end='')
Installing foo.
zc.recipe.deployment:
Creating 'PREFIX/etc/foo',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/var/cache/foo',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/var/lib/foo',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/var/log/foo',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/var/run/foo',
mode 750, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/etc/cron.d',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/etc/init.d',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/etc/logrotate.d',
mode 755, user 'USER', group 'GROUP'
Installing y.cfg.>>> print_(open('anotherconfig.cfg').read())
one
<BLANKLINE>
#[foo_y.cfg DO NOT MODIFY LINES FROM HERE#
I predict that there will be a blank line above this.
#TILL HERE foo_y.cfg]#
<BLANKLINE>But the recipe doesn’t add a new line if there was one already at the end.>>> _ = open('anotherconfig.cfg', 'w').write('ends with a new line\n')
>>> print_(open('anotherconfig.cfg').read())
ends with a new line
<BLANKLINE>We modify the buildout configuration so that “install” is invoked again:>>> write('buildout.cfg',
... '''
... [buildout]
... parts = foo y.cfg
...
... [foo]
... recipe = zc.recipe.deployment
... prefix = %s
... user = %s
... etc-user = %s
...
... [y.cfg]
... recipe = zc.recipe.deployment:sharedconfig
... path = anotherconfig.cfg
... deployment = foo
... text = there will still be only a single blank line above.
... ''' % (sample_buildout, user, user))
>>> print_(system(join('bin', 'buildout')), end='')
Uninstalling y.cfg.
Running uninstall recipe.
Updating foo.
Installing y.cfg.>>> print_(open('anotherconfig.cfg').read())
ends with a new line
<BLANKLINE>
#[foo_y.cfg DO NOT MODIFY LINES FROM HERE#
there will still be only a single blank line above.
#TILL HERE foo_y.cfg]#
<BLANKLINE>If we uninstall the file, the data will be the same as “original_data”:>>> print_(system(join('bin', 'buildout')+' buildout:parts='), end='')
Uninstalling y.cfg.
Running uninstall recipe.
Uninstalling foo.
Running uninstall recipe.
zc.recipe.deployment: Removing 'PREFIX/etc/foo'
zc.recipe.deployment: Removing 'PREFIX/etc/cron.d'.
zc.recipe.deployment: Removing 'PREFIX/etc/init.d'.
zc.recipe.deployment: Removing 'PREFIX/etc/logrotate.d'.
zc.recipe.deployment: Removing 'PREFIX/var/cache/foo'.
zc.recipe.deployment: Removing 'PREFIX/var/lib/foo'.
zc.recipe.deployment: Removing 'PREFIX/var/log/foo'.
zc.recipe.deployment: Removing 'PREFIX/var/run/foo'.>>> print_(open('anotherconfig.cfg').read())
ends with a new line
<BLANKLINE>
zc.recipe.egg
The egg-installation recipe installs eggs into a buildout eggs
directory. It also generates scripts in a buildout bin directory with
egg paths baked into them.Change History2.0.7 (2018-07-02)For the 2.0.6 change, we require zc.buildout 2.12.0. The install_requires in setup.py now also says that.2.0.6 (2018-07-02)Added extra keyword argument allow_unknown_extras to support zc.buildout
2.12.0.2.0.5 (2017-12-04)Fixed #429: added sorting of working set by priority of different
types of paths (develop-eggs-directory, eggs-directory, other paths).2.0.4 (2017-08-17)Fixed #153: buildout should cache working set environments
[rafaelbco]2.0.3 (2015-10-02)Releasing zc.recipe.egg as a wheel in addition to only an sdist. No
functional changes.
[reinout]2.0.2 (2015-07-01)Fixed: In zc.recipe.egg#custom recipe’s rpath support, don’t
assume path elements are buildout-relative if they start with one of the
“special” tokens (e.g., $ORIGIN). See: https://github.com/buildout/buildout/issues/225.
[tseaver]2.0.1 (2013-09-05)Accommodated zc.buildout’s switch to post-merge setuptools.2.0.0 (2013-04-02)Enabled ‘prefer-final’ option by default.2.0.0a3 (2012-11-19)Added support for Python 3.2 / 3.3.Added ‘MANIFEST.in’.Support non-entry-point-based scripts.Honor exit codes from scripts (https://bugs.launchpad.net/bugs/697913).2.0.0a2 (2012-05-03)Always unzip installed eggs.Switched from using ‘setuptools’ to ‘distribute’.Removed multi-python support.1.3.2 (2010-08-23)Bugfix for the change introduced in 1.3.1.1.3.1 (2010-08-23)Support recipes that are using zc.recipe.egg by passing in a dict, rather
than a zc.buildout.buildout.Options object as was expected/tested.1.3.0 (2010-08-23)Small further refactorings past 1.2.3b1 to be compatible with
zc.buildout 1.5.0.1.2.3b1 (2010-04-29)Refactored to be used with z3c.recipe.scripts and zc.buildout 1.5.0.
No new user-visible features.1.2.2 (2009-03-18)Fixed dependency information: zc.buildout >1.2.0 is required.1.2.1 (2009-03-18)Refactored generation of relative egg paths to generate simpler code.1.2.0 (2009-03-17)Added the dependent-scripts option. When set to true, scripts will
be generated for all required eggs in addition to the eggs named
specifically. This idea came from two forks of this recipe, repoze.recipe.egg and pylons_sandbox, but the option name is
spelled with a dash instead of an underscore and it defaults to false.Added a relative-paths option. When true, egg paths in scripts are generated
relative to the script names.1.1.0 (2008-07-19)Refactored to honor the new buildout-level unzip option.1.1.0b1 (2008-06-27)Added an environment option to custom extension building options.1.0.0 (2007-11-03)No code changes from the last beta, just some small package meta-data
improvements.1.0.0b5 (2007-02-08)Feature ChangesAdded support for the buildout newest option.1.0.0b4 (2007-01-17)Feature ChangesAdded initialization and arguments options to the scripts recipe.Added an eggs recipe that just installs eggs.Advertised the scripts recipe for creating scripts.1.0.0b3 (2006-12-04)Feature ChangesAdded a develop recipe for creating develop eggs.This is useful to:Specify custom extension building options,Specify a version of Python to use, and to Cause develop eggs to be created after other parts.The develop and build recipes now return the paths created, so that
created eggs or egg links are removed when a part is removed (or
changed).1.0.0b2 (2006-10-16)Updated to work with (not get a warning from) zc.buildout 1.0.0b10.1.0.0b1Updated to work with zc.buildout 1.0.0b3.1.0.0a3Extra path elements to be included in generated scripts can now be
set via the extra-paths option.No longer implicitly generate “py_” scripts for each egg. There is
now an interpreter option to generate a script that, when run
without arguments, launches the Python interactive interpreter with
the path set based on a parts eggs and extra paths. If this script
is run with the name of a Python script and arguments, then the
given script is run with the path set.You can now specify explicit entry points. This is useful for use
with packages that don’t declare their own entry points.Added Windows support.Now-longer implicitly generate “py_” scripts for each egg. You can
now generate a script for launching a Python interpreter or for
running scripts based on the eggs defined for an egg part.You can now specify custom entry points for packages that don’t
declare their entry points.You can now specify extra-paths to be included in generated scripts.1.0.0a2Added a custom recipe for building custom eggs using custom distutils
build_ext arguments.1.0.0a1Initial public versionDetailed DocumentationInstallation of distributions as eggsThe zc.recipe.egg:eggs recipe can be used to install various types if
distutils distributions as eggs. It takes a number of options:

eggs
    A list of eggs to install, given as one or more setuptools
    requirement strings. Each string must be given on a separate
    line.

find-links
    A list of URLs, files, or directories to search for distributions.

index
    The URL of an index server, or almost any other valid URL. :)

    If not specified, the Python Package Index,
    http://cheeseshop.python.org/pypi, is used. You can specify an
    alternate index with this option. If you use the find-links option
    and the links point to the needed distributions, then the index can
    be anything and will be largely ignored. In the examples here,
    we'll just point to an empty directory on our link server. This
    will make our examples run a little bit faster.

We have a link server that has a number of distributions:

>>> print_(get(link_server), end='')
<html><body>
<a href="bigdemo-0.1-py2.3.egg">bigdemo-0.1-py2.3.egg</a><br>
<a href="demo-0.1-py2.3.egg">demo-0.1-py2.3.egg</a><br>
<a href="demo-0.2-py2.3.egg">demo-0.2-py2.3.egg</a><br>
<a href="demo-0.3-py2.3.egg">demo-0.3-py2.3.egg</a><br>
<a href="demo-0.4rc1-py2.3.egg">demo-0.4rc1-py2.3.egg</a><br>
<a href="demoneeded-1.0.zip">demoneeded-1.0.zip</a><br>
<a href="demoneeded-1.1.zip">demoneeded-1.1.zip</a><br>
<a href="demoneeded-1.2rc1.zip">demoneeded-1.2rc1.zip</a><br>
<a href="du_zipped-1.0-pyN.N.egg">du_zipped-1.0-pyN.N.egg</a><br>
<a href="extdemo-1.4.zip">extdemo-1.4.zip</a><br>
<a href="index/">index/</a><br>
<a href="mixedcase-0.5.zip">mixedcase-0.5.zip</a><br>
<a href="other-1.0-py2.3.egg">other-1.0-py2.3.egg</a><br>
</body></html>

We have a sample buildout. Let's update its configuration file to
install the demo package.

>>> write(sample_buildout, 'buildout.cfg',
... """
... [buildout]
... parts = demo
...
... [demo]
... recipe = zc.recipe.egg:eggs
... eggs = demo<0.3
... find-links = %(server)s
... index = %(server)s/index
... """ % dict(server=link_server))

In this example, we limited ourselves to revisions before 0.3. We also
specified where to find distributions using the find-links option.

Let's run the buildout:

>>> import os
>>> print_(system(buildout), end='')
Installing demo.
Getting distribution for 'demo<0.3'.
Got demo 0.2.
Getting distribution for 'demoneeded'.
Got demoneeded 1.1.

Now, if we look at the buildout eggs directory:

>>> ls(sample_buildout, 'eggs')
d demo-0.2-py2.3.egg
d demoneeded-1.1-py2.3.egg
- setuptools-0.7-py2.3.egg
d zc.buildout-1.0-py2.3.egg

We see that we got an egg for demo that met the requirement, as well
as the egg for demoneeded, which demo requires. (We also see an egg
link for the recipe in the develop-eggs directory. This egg link was
actually created as part of the sample buildout setup. Normally, when
using the recipe, you'll get a regular egg installation.)

Script generation

The demo egg defined a script, but we didn't get one installed:

>>> ls(sample_buildout, 'bin')
- buildout

If we want scripts provided by eggs to be installed, we should use the
scripts recipe:

>>> write(sample_buildout, 'buildout.cfg',
... """
... [buildout]
... parts = demo
...
... [demo]
... recipe = zc.recipe.egg:scripts
... eggs = demo<0.3
... find-links = %(server)s
... index = %(server)s/index
... """ % dict(server=link_server))

>>> print_(system(buildout), end='')
Uninstalling demo.
Installing demo.
Generated script '/sample-buildout/bin/demo'.

Now we also see the script defined by the demo egg:

>>> ls(sample_buildout, 'bin')
- buildout
- demo

The scripts recipe defines some additional options:

entry-points
    A list of entry-point identifiers of the form:

    name=module:attrs

    where name is a script name, module is a dotted name resolving to a
    module name, and attrs is a dotted name resolving to a callable
    object within a module.

    This option is useful when working with distributions that don't
    declare entry points, such as distributions not written to work
    with setuptools.

    Examples can be seen in the section "Specifying entry points" below.

scripts
    Control which scripts are generated. The value should be a list of
    zero or more tokens. Each token is either a name, or a name
    followed by an '=' and a new name. Only the named scripts are
    generated. If no tokens are given, then script generation is
    disabled. If the option isn't given at all, then all scripts
    defined by the named eggs will be generated.

dependent-scripts
    If set to the string "true", scripts will be generated for all
    required eggs in addition to the eggs specifically named.

interpreter
    The name of a script to generate that allows access to a Python
    interpreter that has the path set based on the eggs installed.

extra-paths
    Extra paths to include in a generated script.

initialization
    Specify some Python initialization code. This is very limited. In
    particular, be aware that leading whitespace is stripped from the
    code given.

arguments
    Specify some arguments to be passed to entry points as Python source.

relative-paths
    If set to true, then egg paths will be generated relative to the
    script path. This allows a buildout to be moved without breaking
    egg paths. This option can be set in either the script section or
    in the buildout section.

Let's add an interpreter option:

>>> write(sample_buildout, 'buildout.cfg',
... """
... [buildout]
... parts = demo
...
... [demo]
... recipe = zc.recipe.egg
... eggs = demo<0.3
... find-links = %(server)s
... index = %(server)s/index
... interpreter = py-demo
... """ % dict(server=link_server))

Note that we omitted the entry point name from the recipe
specification. We were able to do this because the scripts recipe is
the default entry point for the zc.recipe.egg egg.

>>> print_(system(buildout), end='')
Uninstalling demo.
Installing demo.
Generated script '/sample-buildout/bin/demo'.
Generated interpreter '/sample-buildout/bin/py-demo'.

Now we also get a py-demo script for giving us a Python prompt with
the path for demo and any eggs it depends on included in sys.path.
This is useful for debugging and testing.

>>> ls(sample_buildout, 'bin')
- buildout
- demo
- py-demo

If we run the demo script, it prints out some minimal data:

>>> print_(system(join(sample_buildout, 'bin', 'demo')), end='')
2 1

The values it prints out happen to be values defined in the
modules installed.

We can also run the py-demo script. Here we'll just print out the
parts of the path added to reflect the eggs:

>>> print_(system(join(sample_buildout, 'bin', 'py-demo'),
... """import os, sys
... for p in sys.path:
... if 'demo' in p:
... _ = sys.stdout.write(os.path.basename(p)+'\\n')
...
... """).replace('>>> ', '').replace('... ', ''), end='')
... # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
demo-0.2-py2.4.egg
demoneeded-1.1-py2.4.egg...

Egg updating

The recipe normally gets the most recent distribution that satisfies the
specification. It won't do this if the buildout is either in
non-newest mode or in offline mode. To see how this works, we'll
remove the restriction on demo:

>>> write(sample_buildout, 'buildout.cfg',
... """
... [buildout]
... parts = demo
...
... [demo]
... recipe = zc.recipe.egg
... find-links = %(server)s
... index = %(server)s/index
... """ % dict(server=link_server))

and run the buildout in non-newest mode:

>>> print_(system(buildout+' -N'), end='')
Uninstalling demo.
Installing demo.
Generated script '/sample-buildout/bin/demo'.

Note that we removed the eggs option, and the eggs defaulted to the
part name. Because we removed the eggs option, the demo was
reinstalled.

We'll also run the buildout in off-line mode:

>>> print_(system(buildout+' -o'), end='')
Updating demo.

We didn't get an update for demo:

>>> ls(sample_buildout, 'eggs')
d demo-0.2-py2.3.egg
d demoneeded-1.1-py2.3.egg
- setuptools-0.7-py2.3.egg
d zc.buildout-1.0-py2.3.egg

If we run the buildout in the default online and newest modes,
we'll get an update for demo:

>>> print_(system(buildout), end='')
Updating demo.
Getting distribution for 'demo'.
Got demo 0.3.
Generated script '/sample-buildout/bin/demo'.

Then we'll get a new demo egg:

>>> ls(sample_buildout, 'eggs')
d demo-0.2-py2.3.egg
d demo-0.3-py2.3.egg
d demoneeded-1.1-py2.3.egg
- setuptools-0.7-py2.4.egg
d zc.buildout-1.0-py2.4.egg

The script is updated too:

>>> print_(system(join(sample_buildout, 'bin', 'demo')), end='')
2 1

Controlling script generation

You can control which scripts get generated using the scripts option.
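The token grammar the scripts option uses (each whitespace-separated token is either name or name=new-name) can be sketched as a small parser. This is an illustration only; parse_scripts_option is a hypothetical helper, not the recipe's actual implementation:

```python
def parse_scripts_option(value):
    """Parse a scripts option value into {script-name: installed-name}.

    Each whitespace-separated token is either ``name`` (keep the name)
    or ``name=newname`` (install the script under a different name).
    Illustration only; not the recipe's actual code.
    """
    mapping = {}
    for token in value.split():
        if '=' in token:
            name, new_name = token.split('=', 1)
        else:
            name = new_name = token
        mapping[name] = new_name
    return mapping

print(parse_scripts_option("demo=foo"))  # {'demo': 'foo'}
print(parse_scripts_option(""))          # {}
```

An empty value yields an empty mapping, matching the behavior described above: giving the option with no tokens disables script generation.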
For example, to suppress scripts, use the scripts option without any
arguments:

>>> write(sample_buildout, 'buildout.cfg',
... """
... [buildout]
... parts = demo
...
... [demo]
... recipe = zc.recipe.egg
... find-links = %(server)s
... index = %(server)s/index
... scripts =
... """ % dict(server=link_server))

>>> print_(system(buildout), end='')
Uninstalling demo.
Installing demo.

>>> ls(sample_buildout, 'bin')
- buildout

You can also control the name used for scripts:

>>> write(sample_buildout, 'buildout.cfg',
... """
... [buildout]
... parts = demo
...
... [demo]
... recipe = zc.recipe.egg
... find-links = %(server)s
... index = %(server)s/index
... scripts = demo=foo
... """ % dict(server=link_server))

>>> print_(system(buildout), end='')
Uninstalling demo.
Installing demo.
Generated script '/sample-buildout/bin/foo'.

>>> ls(sample_buildout, 'bin')
- buildout
- foo

Specifying extra script paths

If we need to include extra paths in a script, we can use the
extra-paths option:

>>> write(sample_buildout, 'buildout.cfg',
... """
... [buildout]
... parts = demo
...
... [demo]
... recipe = zc.recipe.egg
... find-links = %(server)s
... index = %(server)s/index
... scripts = demo=foo
... extra-paths =
... /foo/bar
... ${buildout:directory}/spam
... """ % dict(server=link_server))

>>> print_(system(buildout), end='')
Uninstalling demo.
Installing demo.
Generated script '/sample-buildout/bin/foo'.

Let's look at the script that was generated:

>>> cat(sample_buildout, 'bin', 'foo') # doctest: +NORMALIZE_WHITESPACE
#!/usr/local/bin/python2.7
<BLANKLINE>
import sys
sys.path[0:0] = [
'/sample-buildout/eggs/demo-0.3-py2.4.egg',
'/sample-buildout/eggs/demoneeded-1.1-py2.4.egg',
'/foo/bar',
'/sample-buildout/spam',
]
<BLANKLINE>
import eggrecipedemo
<BLANKLINE>
if __name__ == '__main__':
sys.exit(eggrecipedemo.main())

Relative egg paths

If the relative-paths option is specified with a true value, then
paths will be generated relative to the script. This is useful when
you want to be able to move a buildout directory around without
breaking scripts.

>>> write(sample_buildout, 'buildout.cfg',
... """
... [buildout]
... parts = demo
...
... [demo]
... recipe = zc.recipe.egg
... find-links = %(server)s
... index = %(server)s/index
... scripts = demo=foo
... relative-paths = true
... extra-paths =
... /foo/bar
... ${buildout:directory}/spam
... """ % dict(server=link_server))

>>> print_(system(buildout), end='')
Uninstalling demo.
Installing demo.
Generated script '/sample-buildout/bin/foo'.

Let's look at the script that was generated:

>>> cat(sample_buildout, 'bin', 'foo') # doctest: +NORMALIZE_WHITESPACE
#!/usr/local/bin/python2.7
<BLANKLINE>
import os
<BLANKLINE>
join = os.path.join
base = os.path.dirname(os.path.abspath(os.path.realpath(__file__)))
base = os.path.dirname(base)
<BLANKLINE>
import sys
sys.path[0:0] = [
join(base, 'eggs/demo-0.3-pyN.N.egg'),
join(base, 'eggs/demoneeded-1.1-pyN.N.egg'),
'/foo/bar',
join(base, 'spam'),
]
<BLANKLINE>
import eggrecipedemo
<BLANKLINE>
if __name__ == '__main__':
sys.exit(eggrecipedemo.main())

You can specify relative paths in the buildout section, rather than in
each individual script section:

>>> write(sample_buildout, 'buildout.cfg',
... """
... [buildout]
... parts = demo
... relative-paths = true
...
... [demo]
... recipe = zc.recipe.egg
... find-links = %(server)s
... index = %(server)s/index
... scripts = demo=foo
... extra-paths =
... /foo/bar
... ${buildout:directory}/spam
... """ % dict(server=link_server))

>>> print_(system(buildout), end='')
Uninstalling demo.
Installing demo.
Generated script '/sample-buildout/bin/foo'.

>>> cat(sample_buildout, 'bin', 'foo') # doctest: +NORMALIZE_WHITESPACE
#!/usr/local/bin/python2.7
<BLANKLINE>
import os
<BLANKLINE>
join = os.path.join
base = os.path.dirname(os.path.abspath(os.path.realpath(__file__)))
base = os.path.dirname(base)
<BLANKLINE>
import sys
sys.path[0:0] = [
join(base, 'eggs/demo-0.3-pyN.N.egg'),
join(base, 'eggs/demoneeded-1.1-pyN.N.egg'),
'/foo/bar',
join(base, 'spam'),
]
<BLANKLINE>
import eggrecipedemo
<BLANKLINE>
if __name__ == '__main__':
sys.exit(eggrecipedemo.main())

Specifying initialization code and arguments

Sometimes, we need to do more than just call entry points. We can
use the initialization and arguments options to specify extra code
to be included in generated scripts:

>>> write(sample_buildout, 'buildout.cfg',
... """
... [buildout]
... parts = demo
...
... [demo]
... recipe = zc.recipe.egg
... find-links = %(server)s
... index = %(server)s/index
... scripts = demo=foo
... extra-paths =
... /foo/bar
... ${buildout:directory}/spam
... initialization = a = (1, 2
... 3, 4)
... interpreter = py
... arguments = a, 2
... """ % dict(server=link_server))

>>> print_(system(buildout), end='')
Uninstalling demo.
Installing demo.
Generated script '/sample-buildout/bin/foo'.
Generated interpreter '/sample-buildout/bin/py'.

>>> cat(sample_buildout, 'bin', 'foo') # doctest: +NORMALIZE_WHITESPACE
#!/usr/local/bin/python2.7
<BLANKLINE>
import sys
sys.path[0:0] = [
'/sample-buildout/eggs/demo-0.3-py2.4.egg',
'/sample-buildout/eggs/demoneeded-1.1-py2.4.egg',
'/foo/bar',
'/sample-buildout/spam',
]
<BLANKLINE>
a = (1, 2
3, 4)
<BLANKLINE>
import eggrecipedemo
<BLANKLINE>
if __name__ == '__main__':
sys.exit(eggrecipedemo.main(a, 2))

Here we see that the initialization code we specified was added after
setting the path. Note, as mentioned above, that leading whitespace
has been stripped. Similarly, the argument code we specified was
added in the entry point call (to main).

Our interpreter also has the initialization code:

>>> cat(sample_buildout, 'bin', 'py')
... # doctest: +NORMALIZE_WHITESPACE +ELLIPSIS
#!/usr/local/bin/python2.7
<BLANKLINE>
import sys
<BLANKLINE>
sys.path[0:0] = [
'/sample-buildout/eggs/demo-0.3-py3.3.egg',
'/sample-buildout/eggs/demoneeded-1.1-py3.3.egg',
'/foo/bar',
'/sample-buildout/spam',
]
<BLANKLINE>
a = (1, 2
3, 4)
<BLANKLINE>
<BLANKLINE>
_interactive = True
...

Specifying entry points

Scripts can be generated for entry points declared explicitly. We can
declare entry points using the entry-points option:

>>> write(sample_buildout, 'buildout.cfg',
... """
... [buildout]
... parts = demo
...
... [demo]
... recipe = zc.recipe.egg
... find-links = %(server)s
... index = %(server)s/index
... extra-paths =
... /foo/bar
... ${buildout:directory}/spam
... entry-points = alt=eggrecipedemo:alt other=foo.bar:a.b.c
... """ % dict(server=link_server))

>>> print_(system(buildout), end='')
Uninstalling demo.
Installing demo.
Generated script '/sample-buildout/bin/demo'.
Generated script '/sample-buildout/bin/alt'.
Generated script '/sample-buildout/bin/other'.

>>> ls(sample_buildout, 'bin')
- alt
- buildout
- demo
- other

>>> cat(sample_buildout, 'bin', 'other')
#!/usr/local/bin/python2.7
<BLANKLINE>
import sys
sys.path[0:0] = [
'/sample-buildout/eggs/demo-0.3-py2.4.egg',
'/sample-buildout/eggs/demoneeded-1.1-py2.4.egg',
'/foo/bar',
'/sample-buildout/spam',
]
<BLANKLINE>
import foo.bar
<BLANKLINE>
if __name__ == '__main__':
sys.exit(foo.bar.a.b.c())

Generating all scripts

The bigdemo package doesn't have any scripts, but it requires the
demo package, which does have a script. Specify
dependent-scripts = true to generate all scripts in required packages:

>>> write(sample_buildout, 'buildout.cfg',
... """
... [buildout]
... parts = bigdemo
...
... [bigdemo]
... recipe = zc.recipe.egg
... find-links = %(server)s
... index = %(server)s/index
... dependent-scripts = true
... """ % dict(server=link_server))
>>> print_(system(buildout+' -N'), end='')
Uninstalling demo.
Installing bigdemo.
Getting distribution for 'bigdemo'.
Got bigdemo 0.1.
Generated script '/sample-buildout/bin/demo'.

Offline mode

If the buildout offline option is set to "true", then no attempt will
be made to contact an index server:

>>> write(sample_buildout, 'buildout.cfg',
... """
... [buildout]
... parts = demo
... offline = true
...
... [demo]
... recipe = zc.recipe.egg
... index = eek!
... scripts = demo=foo
... """ % dict(server=link_server))

>>> print_(system(buildout), end='')
Uninstalling bigdemo.
Installing demo.
Generated script '/sample-buildout/bin/foo'.

Creating eggs with extensions needing custom build settings

Sometimes, it's necessary to provide extra control over how an egg is
created. This is commonly true for eggs with extension modules that
need to access libraries or include files.The zc.recipe.egg:custom recipe can be used to define an egg with
custom build parameters. The currently defined parameters are:

include-dirs
    A new-line separated list of directories to search for include
    files.

library-dirs
    A new-line separated list of directories to search for libraries
    to link with.

rpath
    A new-line separated list of directories to search for dynamic
    libraries at run time.

define
    A comma-separated list of names of C preprocessor variables to
    define.

undef
    A comma-separated list of names of C preprocessor variables to
    undefine.

libraries
    The name of an additional library to link with. Due to limitations
    in distutils, and despite the option name, only a single library
    can be specified.

link-objects
    The name of a link object to link against. Due to limitations
    in distutils, and despite the option name, only a single link object
    can be specified.

debug
    Compile/link with debugging information.

force
    Forcibly build everything (ignore file timestamps).

compiler
    Specify the compiler type.

swig
    The path to the swig executable.

swig-cpp
    Make SWIG create C++ files (the default is C).

swig-opts
    A list of SWIG command line options.

In addition, the following options can be used to specify the egg:

egg
    A specification for the egg to be created, given as a setuptools
    requirement string. This defaults to the part name.

find-links
    A list of URLs, files, or directories to search for distributions.

index
    The URL of an index server, or almost any other valid URL. :)

    If not specified, the Python Package Index,
    http://cheeseshop.python.org/pypi, is used. You can specify an
    alternate index with this option. If you use the find-links option
    and the links point to the needed distributions, then the index can
    be anything and will be largely ignored. In the examples here,
    we'll just point to an empty directory on our link server. This
    will make our examples run a little bit faster.

environment
    The name of a section with additional environment variables. The
    environment variables are set before the egg is built.

To illustrate this, we'll define a buildout that builds an egg for a
package that has a simple extension module:

#include <Python.h>
#include <extdemo.h>
static PyMethodDef methods[] = {};
PyMODINIT_FUNC
initextdemo(void)
{
PyObject *m;
m = Py_InitModule3("extdemo", methods, "");
#ifdef TWO
PyModule_AddObject(m, "val", PyInt_FromLong(2));
#else
PyModule_AddObject(m, "val", PyInt_FromLong(EXTDEMO));
#endif
}

The extension depends on a system-dependent include file, extdemo.h,
that defines a constant, EXTDEMO, that is exposed by the extension.

The extension module is available as a source distribution,
extdemo-1.4.tar.gz, on a distribution server.

We have a sample buildout, to which we'll add an include directory
containing the necessary include file:

>>> mkdir('include')
>>> write('include', 'extdemo.h',
... """
... #define EXTDEMO 42
... """)

We'll also update the buildout configuration file to define a part for
the egg:

>>> write(sample_buildout, 'buildout.cfg',
... """
... [buildout]
... parts = extdemo
...
... [extdemo]
... recipe = zc.recipe.egg:custom
... find-links = %(server)s
... index = %(server)s/index
... include-dirs = include
...
... """ % dict(server=link_server))

>>> print_(system(buildout), end='') # doctest: +ELLIPSIS
Installing extdemo...

We got the zip_safe warning because the source distribution we used
wasn't setuptools based and thus didn't set the option.

The egg is created in the develop-eggs directory, not the eggs
directory, because it depends on buildout-specific parameters and the
eggs directory can be shared across multiple buildouts.

>>> ls(sample_buildout, 'develop-eggs')
d extdemo-1.4-py2.4-unix-i686.egg
- zc.recipe.egg.egg-link

Note that no scripts or dependencies are installed. To install
dependencies or scripts for a custom egg, define another part and use
the zc.recipe.egg recipe, listing the custom egg as one of the eggs to
be installed. The zc.recipe.egg recipe will use the installed egg.

Let's define a script that uses our extdemo:

>>> mkdir('demo')
>>> write('demo', 'demo.py',
... """
... import extdemo, sys
... def print_(*args):
... sys.stdout.write(' '.join(map(str, args)) + '\\n')
... def main():
... print_(extdemo.val)
... """)

>>> write('demo', 'setup.py',
... """
... from setuptools import setup
... setup(name='demo')
... """)

>>> write('buildout.cfg',
... """
... [buildout]
... develop = demo
... parts = extdemo demo
...
... [extdemo]
... recipe = zc.recipe.egg:custom
... find-links = %(server)s
... index = %(server)s/index
... include-dirs = include
...
... [demo]
... recipe = zc.recipe.egg
... eggs = demo
... extdemo
... entry-points = demo=demo:main
... """ % dict(server=link_server))

>>> print_(system(buildout), end='')
Develop: '/sample-buildout/demo'
Updating extdemo.
Installing demo.
Generated script '/sample-buildout/bin/demo'...

When we run the script, we'll see 42 printed:

>>> print_(system(join('bin', 'demo')), end='')
42

Updating

The custom recipe will normally check for new source distributions
that meet the given specification. This can be suppressed using the
buildout non-newest and offline modes. We’ll generate a new source
distribution for extdemo:

>>> update_extdemo()

If we run the buildout in non-newest or offline modes:

>>> print_(system(buildout+' -N'), end='')
Develop: '/sample-buildout/demo'
Updating extdemo.
Updating demo.

>>> print_(system(buildout+' -o'), end='')
Develop: '/sample-buildout/demo'
Updating extdemo.
Updating demo.

We won't get an update.

>>> ls(sample_buildout, 'develop-eggs')
- demo.egg-link
d extdemo-1.4-py2.4-unix-i686.egg
- zc.recipe.egg.egg-link

But if we run the buildout in the default on-line and newest modes, we
will. This time we also get the zip_safe message again, because the new
version is imported:

>>> print_(system(buildout), end='') # doctest: +ELLIPSIS
Develop: '/sample-buildout/demo'
Updating extdemo.
zip_safe flag not set; analyzing archive contents...
Updating demo.
...

>>> ls(sample_buildout, 'develop-eggs')
- demo.egg-link
d extdemo-1.4-py2.4-linux-i686.egg
d extdemo-1.5-py2.4-linux-i686.egg
- zc.recipe.egg.egg-link

Controlling the version used

We can specify a specific version using the egg option:

>>> write('buildout.cfg',
... """
... [buildout]
... develop = demo
... parts = extdemo demo
...
... [extdemo]
... recipe = zc.recipe.egg:custom
... egg = extdemo ==1.4
... find-links = %(server)s
... index = %(server)s/index
... include-dirs = include
...
... [demo]
... recipe = zc.recipe.egg
... eggs = demo
... extdemo ==1.4
... entry-points = demo=demo:main
... """ % dict(server=link_server))

>>> print_(system(buildout+' -D'), end='') # doctest: +ELLIPSIS
Develop: '/sample-buildout/demo'
...

>>> ls(sample_buildout, 'develop-eggs')
- demo.egg-link
d extdemo-1.4-py2.4-linux-i686.egg
- zc.recipe.egg.egg-link

Controlling environment variables

To set additional environment variables, the environment option is used.

Let's create a recipe which prints out environment variables. We need
this to make sure the environment variables that were set are removed
after the egg:custom recipe has run.

>>> mkdir(sample_buildout, 'recipes')
>>> write(sample_buildout, 'recipes', 'environ.py',
... """
... import logging, os, zc.buildout
...
... class Environ:
...
... def __init__(self, buildout, name, options):
... self.name = name
...
... def install(self):
... logging.getLogger(self.name).info(
... 'test-variable left over: %s' % (
... 'test-variable' in os.environ))
... return []
...
... def update(self):
... self.install()
... """)
>>> write(sample_buildout, 'recipes', 'setup.py',
... """
... from setuptools import setup
...
... setup(
... name = "recipes",
... entry_points = {'zc.buildout': ['environ = environ:Environ']},
... )
... """)

Create our buildout:

>>> write(sample_buildout, 'buildout.cfg',
... """
... [buildout]
... develop = recipes
... parts = extdemo checkenv
...
... [extdemo-env]
... test-variable = foo
...
... [extdemo]
... recipe = zc.recipe.egg:custom
... find-links = %(server)s
... index = %(server)s/index
... include-dirs = include
... environment = extdemo-env
...
... [checkenv]
... recipe = recipes:environ
...
... """ % dict(server=link_server))
>>> print_(system(buildout), end='') # doctest: +ELLIPSIS
Develop: '/sample-buildout/recipes'
Uninstalling demo.
Uninstalling extdemo.
Installing extdemo.
Have environment test-variable: foo
zip_safe flag not set; analyzing archive contents...
Installing checkenv.
...

The setup.py also printed out that we have set the environment
variable test-variable to foo. After the buildout run, the variable is
reset to its original value (i.e. removed).

When an environment variable has a value before zc.recipe.egg:custom
is run, the original value will be restored:

>>> import os
>>> os.environ['test-variable'] = 'bar'
>>> print_(system(buildout), end='')
Develop: '/sample-buildout/recipes'
Updating extdemo.
Updating checkenv.
checkenv: test-variable left over: True

>>> os.environ['test-variable']
'bar'

Sometimes it is necessary to prepend or append to an existing environment
variable, for instance to add something to the PATH. Therefore all variables
are interpolated with os.environ before they're set:

>>> write(sample_buildout, 'buildout.cfg',
... """
... [buildout]
... develop = recipes
... parts = extdemo checkenv
...
... [extdemo-env]
... test-variable = foo:%%(test-variable)s
...
... [extdemo]
... recipe = zc.recipe.egg:custom
... find-links = %(server)s
... index = %(server)s/index
... include-dirs = include
... environment = extdemo-env
...
... [checkenv]
... recipe = recipes:environ
...
... """ % dict(server=link_server))
>>> print_(system(buildout), end='') # doctest: +ELLIPSIS
Develop: '/sample-buildout/recipes'
Uninstalling extdemo.
Installing extdemo.
Have environment test-variable: foo:bar
zip_safe flag not set; analyzing archive contents...
Updating checkenv.
...

>>> os.environ['test-variable']
'bar'
>>> del os.environ['test-variable']

Create a clean buildout.cfg without the checkenv recipe, and delete the
recipe:

>>> write(sample_buildout, 'buildout.cfg',
... """
... [buildout]
... develop = recipes
... parts = extdemo
...
... [extdemo]
... recipe = zc.recipe.egg:custom
... find-links = %(server)s
... index = %(server)s/index
... include-dirs = include
...
... """ % dict(server=link_server))
>>> print_(system(buildout), end='') # doctest: +ELLIPSIS
Develop: '/sample-buildout/recipes'
Uninstalling checkenv.
Uninstalling extdemo.
Installing extdemo...

>>> rmdir(sample_buildout, 'recipes')

Controlling develop-egg generation

If you want to provide custom build options for a develop egg, you can
use the develop recipe. The recipe has the following options:

setup
    The path to a setup script, or to a directory containing a setup
    script. This is required.

include-dirs
    A new-line separated list of directories to search for include
    files.

library-dirs
    A new-line separated list of directories to search for libraries
    to link with.

rpath
    A new-line separated list of directories to search for dynamic
    libraries at run time.

define
    A comma-separated list of names of C preprocessor variables to
    define.

undef
    A comma-separated list of names of C preprocessor variables to
    undefine.

libraries
    The name of an additional library to link with. Due to limitations
    in distutils, and despite the option name, only a single library
    can be specified.

link-objects
    The name of a link object to link against. Due to limitations
    in distutils, and despite the option name, only a single link object
    can be specified.

debug
    Compile/link with debugging information.

force
    Forcibly build everything (ignore file timestamps).

compiler
    Specify the compiler type.

swig
    The path to the swig executable.

swig-cpp
    Make SWIG create C++ files (the default is C).

swig-opts
    A list of SWIG command line options.

To illustrate this, we'll use a directory containing the extdemo
example from the earlier section:

>>> ls(extdemo)
- MANIFEST
- MANIFEST.in
- README
- extdemo.c
- setup.py

>>> write('buildout.cfg',
... """
... [buildout]
... develop = demo
... parts = extdemo demo
...
... [extdemo]
... setup = %(extdemo)s
... recipe = zc.recipe.egg:develop
... include-dirs = include
... define = TWO
...
... [demo]
... recipe = zc.recipe.egg
... eggs = demo
... extdemo
... entry-points = demo=demo:main
... """ % dict(extdemo=extdemo))

Note that we added a define option to cause the preprocessor variable
TWO to be defined. This will cause the module-variable, ‘val’, to be
set with a value of 2.

>>> print_(system(buildout), end='') # doctest: +ELLIPSIS
Develop: '/sample-buildout/demo'
Uninstalling extdemo.
Installing extdemo.
Installing demo.
...

Our develop-eggs directory now includes an egg link for extdemo:

>>> ls('develop-eggs')
- demo.egg-link
- extdemo.egg-link
- zc.recipe.egg.egg-link

and the extdemo now has a built extension:

>>> contents = os.listdir(extdemo)
>>> bool([f for f in contents if f.endswith('.so') or f.endswith('.pyd')])
True

Because develop eggs take precedence over non-develop eggs, the demo
script will use the new develop egg:

>>> print_(system(join('bin', 'demo')), end='')
2

Egg Recipe API for other Recipes

It is common for recipes to accept a collection of egg specifications
and generate scripts based on the resulting working sets. The egg
recipe provides an API that other recipes can use.

A recipe can reuse the egg recipe, supporting the eggs, find-links,
index, and extra-paths options. This is done by creating an
egg recipe instance in a recipe's constructor. In the recipe's
install method, the egg-recipe instance's working_set method is used
to collect the requested eggs and working set.

To illustrate, we create a sample recipe that is a very thin layer
around the egg recipe:

>>> mkdir(sample_buildout, 'sample')
>>> write(sample_buildout, 'sample', 'sample.py',
... """
... import logging, os, sys
... import zc.recipe.egg
...
... def print_(*args):
... sys.stdout.write(' '.join(map(str, args)) + '\\n')
...
... class Sample:
...
... def __init__(self, buildout, name, options):
... self.egg = zc.recipe.egg.Scripts(buildout, name, options)
... self.name = name
... self.options = options
...
... def install(self):
... extras = self.options['extras'].split()
... requirements, ws = self.egg.working_set(extras)
... print_('Part:', self.name)
... print_('Egg requirements:')
... for r in requirements:
... print_(r)
... print_('Working set:')
... for d in ws:
... print_(d)
... print_('extra paths:', self.egg.extra_paths)
... return ()
...
... update = install
... """)Here we instantiated the egg recipe in the constructor, saving it in
an attribute. This also initialized the options dictionary.In our install method, we called the working_set method on the
instance we saved. The working_set method takes an optional sequence
of extra requirements to be included in the working set.>>> write(sample_buildout, 'sample', 'setup.py',
... """
... from setuptools import setup
...
... setup(
... name = "sample",
... entry_points = {'zc.buildout': ['default = sample:Sample']},
... install_requires = 'zc.recipe.egg',
... )
... """)>>> write(sample_buildout, 'sample', 'README.txt', " ")>>> write(sample_buildout, 'buildout.cfg',
... """
... [buildout]
... develop = sample
... parts = sample-part
...
... [sample-part]
... recipe = sample
... eggs = demo<0.3
... find-links = %(server)s
... index = %(server)sindex
... extras = other
... """ % dict(server=link_server))>>> import os
>>> os.chdir(sample_buildout)
>>> buildout = os.path.join(sample_buildout, 'bin', 'buildout')
>>> print_(system(buildout + ' -q'), end='')
Part: sample-part
Egg requirements:
demo<0.3
Working set:
demoneeded 1.1
other 1.0
demo 0.2
extra paths: []We can see that the options were augmented with additional data
computed by the egg recipe by looking at .installed.cfg:>>> cat(sample_buildout, '.installed.cfg')
[buildout]
installed_develop_eggs = /sample-buildout/develop-eggs/sample.egg-link
parts = sample-part
<BLANKLINE>
[sample-part]
__buildout_installed__ =
__buildout_signature__ = ...
_b = /sample-buildout/bin
_d = /sample-buildout/develop-eggs
_e = /sample-buildout/eggs
bin-directory = /sample-buildout/bin
develop-eggs-directory = /sample-buildout/develop-eggs
eggs = demo<0.3
eggs-directory = /sample-buildout/eggs
extras = other
find-links = http://localhost:27071/
index = http://localhost:27071/index
recipe = sampleIf we use the extra-paths option:>>> write(sample_buildout, 'buildout.cfg',
... """
... [buildout]
... develop = sample
... parts = sample-part
...
... [sample-part]
... recipe = sample
... eggs = demo<0.3
... find-links = %(server)s
... index = %(server)sindex
... extras = other
... extra-paths = /foo/bar
... /spam/eggs
... """ % dict(server=link_server))Then we’ll see that reflected in the extra_paths attribute in the egg
recipe instance:>>> print_(system(buildout + ' -q'), end='')
Part: sample-part
Egg requirements:
demo<0.3
Working set:
demo 0.2
other 1.0
demoneeded 1.1
extra paths: ['/foo/bar', '/spam/eggs']Working set cachingWorking sets are cached, to improve speed on buildouts with multiple similar
parts based onzc.recipe.egg.The egg-recipe instance’s_working_sethelper method is used to make
the caching easier. It does the same job asworking_set()but with some
differences:The signature is different: all information needed to build the working set
is passed as parameters.The return value is simpler: only an instance ofpkg_resources.WorkingSetis returned.Here’s an example:>>> from zc.buildout import testing
>>> from zc.recipe.egg.egg import Eggs
>>> import os
>>> import pkg_resources
>>> recipe = Eggs(buildout=testing.Buildout(), name='fake-part', options={})
>>> eggs_dir = os.path.join(sample_buildout, 'eggs')
>>> develop_eggs_dir = os.path.join(sample_buildout, 'develop-eggs')
>>> testing.install_develop('zc.recipe.egg', develop_eggs_dir)
>>> ws = recipe._working_set(
... distributions=['zc.recipe.egg', 'demo<0.3'],
... eggs_dir=eggs_dir,
... develop_eggs_dir=develop_eggs_dir,
... index=link_server,
... )
Getting...
>>> isinstance(ws, pkg_resources.WorkingSet)
True
>>> sorted(dist.project_name for dist in ws)
['demo', 'demoneeded', 'setuptools', 'zc.buildout', 'zc.recipe.egg']We’ll monkey patch a method in theeasy_installmodule in order to verify if
the cache is working:>>> import zc.buildout.easy_install
>>> old_install = zc.buildout.easy_install.Installer.install
>>> def new_install(*args, **kwargs):
... print('Building working set.')
... return old_install(*args, **kwargs)
>>> zc.buildout.easy_install.Installer.install = new_installNow we check if the caching is working by verifying if the same working set is
built only once.>>> ws_args_1 = dict(
... distributions=['demo>=0.1'],
... eggs_dir=eggs_dir,
... develop_eggs_dir=develop_eggs_dir,
... offline=True,
... )
>>> ws_args_2 = dict(ws_args_1)
>>> ws_args_2['distributions'] = ['demoneeded']
>>> recipe._working_set(**ws_args_1)
Building working set.
<pkg_resources.WorkingSet object at ...>
>>> recipe._working_set(**ws_args_1)
<pkg_resources.WorkingSet object at ...>
>>> recipe._working_set(**ws_args_2)
Building working set.
<pkg_resources.WorkingSet object at ...>
>>> recipe._working_set(**ws_args_1)
<pkg_resources.WorkingSet object at ...>
>>> recipe._working_set(**ws_args_2)
<pkg_resources.WorkingSet object at ...>Undo monkey patch:>>> zc.buildout.easy_install.Installer.install = old_installSincepkg_resources.WorkingSetinstances are mutable, we must ensure thatworking_set()always returns a pristine copy. Otherwise callers would be
able to modify instances inside the cache.Let’s create a working set:>>> ws = recipe._working_set(**ws_args_1)
>>> sorted(dist.project_name for dist in ws)
['demo', 'demoneeded']Now we add a distribution to it:>>> dist = pkg_resources.get_distribution('zc.recipe.egg')
>>> ws.add(dist)
>>> sorted(dist.project_name for dist in ws)
['demo', 'demoneeded', 'zc.recipe.egg']Let’s call the working_set function again and see if the result remains valid:>>> ws = recipe._working_set(**ws_args_1)
>>> sorted(dist.project_name for dist in ws)
['demo', 'demoneeded']Download |
zc.recipe.filestorage | Recipe for setting up a filestorageThis recipe can be used to define a file-storage. It creates a ZConfig
file-storage database specification that can be used by other recipes to
generate ZConfig configuration files.This recipe takes an optional path option. If none is given, it creates and
uses a subdirectory of the buildout parts directory with the same name as the
part.The recipe records a zconfig option for use by other recipes.We’ll show a couple of examples, using a dictionary as a simulated buildout
object:>>> import zc.recipe.filestorage
>>> buildout = dict(
... buildout = {
... 'directory': '/buildout',
... },
... db = {
... 'path': 'foo/Main.fs',
... },
... )
>>> recipe = zc.recipe.filestorage.Recipe(
... buildout, 'db', buildout['db'])>>> print(buildout['db']['path'])
/buildout/foo/Main.fs>>> print(buildout['db']['zconfig'], end='')
<zodb>
<filestorage>
path /buildout/foo/Main.fs
</filestorage>
</zodb>>>> recipe.install()
()>>> import tempfile
>>> d = tempfile.mkdtemp()
>>> buildout = dict(
... buildout = {
... 'parts-directory': d,
... },
... db = {},
... )>>> recipe = zc.recipe.filestorage.Recipe(
... buildout, 'db', buildout['db'])>>> print(buildout['db']['path'])
/tmp/tmpQo0DTB/db/Data.fs>>> print(buildout['db']['zconfig'], end='')
<zodb>
<filestorage>
path /tmp/tmpQo0DTB/db/Data.fs
</filestorage>
</zodb>>>> recipe.install()
()>>> import os
>>> os.listdir(d)
['db']The update method doesn’t do much, as the database part’s directory
already exists, but it is present, so buildout doesn’t complain and doesn’t
accidentally run install() again:>>> recipe.update()If the storage’s directory is removed, is it re-added by the update method:>>> os.rmdir(os.path.join(d, 'db'))
>>> os.listdir(d)
[]
>>> recipe.update()
>>> os.listdir(d)
['db']This is useful in development when the directory containing the database is
removed in order to start the database from scratch.CHANGES2.0 (2023-02-10)Drop support for Python 2.6, 2.7, 3.2, 3.3, 3.4, 3.5, 3.6.Add support for Python 3.7, 3.8, 3.9, 3.10, 3.11, PyPy3.1.1.2 (2014-02-21)Fixed: packaging bug that caused ‘pip install zc.recipe.filestorage’ to fail
with an error about missing README.txt1.1.1 (2014-02-16)Fixed: packaging bug that caused a test failure in
a test runner that didn’t use buildout to run setup.py.1.1.0 (2014-02-14)Python 3 compatibilityUsing Python’sdoctestmodule instead of deprecatedzope.testing.doctest.Removed ‘shared-blob-dir’ from blobstorage section.1.0.0 (2007-11-03)Initial release. |
zc.recipe.icu | The zc.recipe.icu recipe installs the International Component for
Unicode (ICU) library into abuildout.The recipe takes a single option, version:[icu]
recipe = zc.recipe.icu
version = 3.2 |
zc.recipe.macro | Macro Quickstartzc.recipe.macro is a set of recipes allowing sections, or even parts, to be
created dynamically from a macro section and a parameter section. This enables
the buildout to keep its data seperate from its output format.Basic UseIn the most basic use of a macro, a section invokes the macro on itself, and
uses itself as the parameter provider.
Buildout:[buildout]
parts = hard-rocker
[rock]
question = Why do I rock $${:rocking-style}?
[hard-rocker]
recipe = zc.recipe.macro
macro = rock
rocking-style = so hardResult:[hard-rocker]
recipe = zc.recipe.macro:empty
result-sections = hard-rocker
rocking-style = so hard
question = Why do I rock so hard?The recipe gets changed to zc.recipe.macro:empty, which is a do nothing recipe,
because the invoking secion must be a part in order to execute recipes, and
buildout demands that parts have a recipe, so it couldn’t be emptied.Default ValuesIt is possible to include default values for parameters in a macro.Buildout:[buildout]
parts = hard-rocker
[rock]
question = Why do I rock $${:rocking-style}?
rocking-style = so hard
[hard-rocker]
recipe = zc.recipe.macro
macro = rockResult:[hard-rocker]
recipe = zc.recipe.macro:empty
result-sections = hard-rocker
rocking-style = so hard
question = Why do I rock so hard?Creating PartsOf course, there wouldn’t much point to this if one could only create sections
with a dummy recipe. This is where the result-recipe option comes in.Buildout:[buildout]
parts = hard-rocker
[rock]
question = Why do I rock $${:rocking-style}?
[hard-rocker]
recipe = zc.recipe.macro
result-recipe = zc.recipe.macro:test1
macro = rock
rocking-style = so hardResult:[hard-rocker]
recipe = zc.recipe.macro:test1
result-sections = hard-rocker
question = Why do I rock so hard?
rocking-style = so hardTargetsOften, one wants to create multiple new sections. This is possible with the
targets option. This is only useful, however, if one can provide multiple
sources for parameters. Fortunately, you can. Each new section can optionally
be followed by a colon and the name of a section to use for parameters.Buildout:[buildout]
parts = rockers hard-rocker socks-rocker tired-rocker
[rock]
question = Why do I rock $${:rocking-style}?
rocking-style = $${:rocking-style}
[hard-rocker-parameters]
rocking-style = so hard
[socks-rocker-parameters]
rocking-style = my socks
[tired-rocker-parameters]
rocking-style = all night
[rockers]
recipe = zc.recipe.macro
result-recipe = zc.recipe.macro:empty
macro = rock
targets =
hard-rocker:hard-rocker-parameters
socks-rocker:socks-rocker-parameters
tired-rocker:tired-rocker-parametersResult:[rockers]
recipe = zc.recipe.macro:empty
result-sections = hard-rocker socks-rocker tired-rocker
[hard-rocker]
recipe = zc.recipe.macro:empty
rocking-style = so hard
question = Why do I rock so hard?
[socks-rocker]
recipe = zc.recipe.macro:empty
rocking-style = my socks
question = Why do I rock my socks?
[tired-rocker]
recipe = zc.recipe.macro:empty
rocking-style = all night
question = Why do I rock all night?In the previous example we hardcoded the result parts after the invoker in
${buildout:parts}. This is brittle, because someone might change the names of
the targets or alphabetize the parts list. An invocation will have a list of
the sections it modified in its result-sections variable, which is created when
the macro is executed.Buildout:[buildout]
parts = ${rockers:result-sections}
[rock]
question = Why do I rock $${:rocking-style}?
rocking-style = $${:rocking-style}
[hard-rocker-parameters]
rocking-style = so hard
[socks-rocker-parameters]
rocking-style = my socks
[tired-rocker-parameters]
rocking-style = all night
[rockers]
recipe = zc.recipe.macro
result-recipe = zc.recipe.macro:test1
macro = rock
targets =
hard-rocker:hard-rocker-parameters
socks-rocker:socks-rocker-parameters
tired-rocker:tired-rocker-parametersResult:[rockers]
result-sections = hard-rocker socks-rocker tired-rocker
[hard-rocker]
question = Why do I rock so hard?
recipe = zc.recipe.macro:test1
rocking-style = so hard
[socks-rocker]
question = Why do I rock my socks?
recipe = zc.recipe.macro:test1
rocking-style = my socks
[tired-rocker]
question = Why do I rock all night?
recipe = zc.recipe.macro:test1
rocking-style = all nightOrder of Precedence for Recipes for Result SectionsThe source for therecipeoption for result sections has a particular
precedence, as follows:1) recipe in the parameters section of the macro target
2) result-recipe in the parameters section for the macro target
3) result-recipe in the macro invocation
4) recipe in the macro definitionThe following tests will illustrate these rules, starting with rule 4 and
building up.In the following buildout, rock:recipe will be used in the [hard-rockers]
section as the recipe, because of rule 4.
Buildout:[buildout]
parts = rockers
[rock]
question = Why do I rock $${:rocking-style}?
rocking-style = $${:rocking-style}
recipe = zc.recipe.macro:test4
[hard-rocker-parameters]
rocking-style = so hard
[rockers]
recipe = zc.recipe.macro
macro = rock
targets = hard-rocker:hard-rocker-parametersResult:[hard-rocker]
question = Why do I rock so hard?
recipe = zc.recipe.macro:test4
rocking-style = so hardIn the following buildout, ${rockers:result-recipe} will be used because of rule 3.
Buildout:[buildout]
parts = rockers
[rock]
question = Why do I rock $${:rocking-style}?
rocking-style = $${:rocking-style}
recipe = zc.recipe.macro:test4
[hard-rocker-parameters]
rocking-style = so hard
[rockers]
recipe = zc.recipe.macro
result-recipe = zc.recipe.macro:test3
macro = rock
targets = hard-rocker:hard-rocker-parametersResult:[hard-rocker]
question = Why do I rock so hard?
recipe = zc.recipe.macro:test3
rocking-style = so hardIn the following buildout, ${hard-rocker-paramers:result-recipe} will be used because of rule 2.
Buildout:[buildout]
parts = rockers
[rock]
question = Why do I rock $${:rocking-style}?
rocking-style = $${:rocking-style}
recipe = zc.recipe.macro:test4
[hard-rocker-parameters]
result-recipe = zc.recipe.macro:test2
rocking-style = so hard
[rockers]
recipe = zc.recipe.macro
result-recipe = zc.recipe.macro:test3
macro = rock
targets = hard-rocker:hard-rocker-parametersResult:[hard-rocker]
question = Why do I rock so hard?
recipe = zc.recipe.macro:test2
rocking-style = so hardIn the following buildout, ${hard-rocker-parameters:recipe} will be used because of rule 1.
Buildout:[buildout]
parts = rockers
[rock]
question = Why do I rock $${:rocking-style}?
rocking-style = $${:rocking-style}
recipe = zc.recipe.macro:test4
[hard-rocker-parameters]
recipe = zc.recipe.macro:test1
result-recipe = zc.recipe.macro:test2
rocking-style = so hard
[rockers]
recipe = zc.recipe.macro
result-recipe = zc.recipe.macro:test3
macro = rock
targets = hard-rocker:hard-rocker-parametersResult:[hard-rocker]
question = Why do I rock so hard?
recipe = zc.recipe.macro:test1
rocking-style = so hardSpecial Variableszc.recipe.macro uses __name__ to mean the name of the section the macro is
being invoked upon. This allows one to not know the name of particular
section, but still use it in output.Buildout:[buildout]
parts = rockers
[rock]
question = Why does $${:__name__} rock $${:rocking-style}?
[hard-rocker-parameters]
rocking-style = so hard
[socks-rocker-parameters]
rocking-style = my socks
[tired-rocker-parameters]
rocking-style = all night
[rockers]
recipe = zc.recipe.macro
result-recipe = zc.recipe.macro:empty
macro = rock
targets =
hard-rocker:hard-rocker-parameters
socks-rocker:socks-rocker-parameters
tired-rocker:tired-rocker-parametersResult:[rockers]
recipe = zc.recipe.macro:empty
result-sections = hard-rocker socks-rocker tired-rocker
[hard-rocker]
question = Why does hard-rocker rock so hard?
recipe = zc.recipe.macro:empty
[socks-rocker]
question = Why does socks-rocker rock my socks?
recipe = zc.recipe.macro:empty
[tired-rocker]
question = Why does tired-rocker rock all night?
recipe = zc.recipe.macro:emptyCHANGES1.3.0 (2009-07-22)The recipe option for result sections is now pulled from the following
sources, in this order:recipe in the parameters section of the macro targetresult-recipe in the parameters section for the macro targetresult-recipe in the macro invocationrecipe in the macro definitionCorrect a rest error, that prevent the package of being installed with
docutils 0.4.1.2.5 (2009-03-05)Removed version sections from the documentation.Improved test coverage.Put QUICKSTART.txt under test, using manuel.Macro invocations will grow a result-sections value that lists the sections
they modified or created.README.txt is now mostly Manuellified.1.2.4 (2008-07-18)Fixed a bug that made self-targetting invocations fail when the macro utilized
default values and the option that read the default came out the Options
iteration first, added a regression test.Changed the test setup so that buildouts are tested by calling methods rather
than creating a subprocess. This allows for the –coverage flage to work in
bin/test, and makes debugging and mimmicking the test output significantly
easier.Fixed addition of targets so that they will show up properly when one calls
buildout.keys().1.2.3 (2008-07-11)Fixed a bug in the CHANGES ReST1.2.2 (2008-07-11)Fixed a bug in setup.py where setuptools was not being importedChanged date format in CHANGES.txt to YYYY-MM-DD1.2.1 (2008-07-10)Fixed a typo in the quickstart1.2.0 (2008-07-10)First release |
zc.recipe.rhrc | This package provides a zc.buildout recipe for creating Red-Hat Linux
compatible run-control scripts.ContentsChanges1.4.2 (2012-12-20)1.4.1 (2012-08-31)1.4.0 (2012-05-18)1.3.0 (2010/05/26)New FeaturesBugs Fixed1.2.0 (2009/04/06)1.1.0 (2008/02/01)1.0.0 (2008/01/15)Detailed DocumentationCreate Red-Hat Linux (chkconfig) rc scriptsWorking with existing control scriptsMultiple processesIndependent processesDeploymentsProcess ManagementRegression TestsDownloadChanges1.4.2 (2012-12-20)Fixed: Errors were raised if stopping a run script failed duringuninstall. This could cause a buildout to be wedged, because
you couldn’t uninstall a broken/missing run script.1.4.1 (2012-08-31)Fixed: Processes weren’t started on update.In a perfect world, this wouldn’t be necessary, as in the
update case, the process would already be running, however,
it’s helpful to make sure the process is running by trying to
start it.1.4.0 (2012-05-18)Added optional process-management support. If requested, then run
scripts are run as part of install and uninstall.Fixed: missingtestdependency onzope.testing1.3.0 (2010/05/26)New FeaturesA new independent-processes option causes multiple processes to be
restarted independently, rather then stoping all of the and the
starting all of them.Bugs FixedGenerated run scripts had trailing whitespace.1.2.0 (2009/04/06)displays the name of the script being run
for each script when it is started, stopped, or restarted1.1.0 (2008/02/01)Use the deployment name option (as provided by zc.recipe.deployment
0.6.0 and later) if present when generating script names.Use the deployment rc-directory as the destination when a deployment
is used.Use /sbin/chkconfig rather than chkconfig, as I’m told it is always in
that location and rarely in anyone’s path. :)1.0.0 (2008/01/15)Initial public releaseDetailed DocumentationCreate Red-Hat Linux (chkconfig) rc scriptsThe zc.recipes.rhrc recipe creates Red-Hat Linux (chkconfig) rc
scripts. It can create individual rc scripts, as well as combined rc
scripts that start multiple applications.The recipe has a parts option that takes the names of sections that
define run scripts. They should either:Define a run-script option that contains a one-line shell script, orThe file /etc/init.d/PART should exist, where PART is the part name.A simple example will, hopefully make this clearer.>>> demo = tmpdir('demo')>>> write('buildout.cfg',
... """
... [buildout]
... parts = zoperc
...
... [zoperc]
... recipe = zc.recipe.rhrc
... parts = zope
... dest = %(dest)s
...
... [zope]
... run-script = /opt/zope/bin/zopectl -C /etc/zope.conf
... """ % dict(dest=demo))Normally the recipe writes scripts to /etc/init.d. We can override
the destination, which we’ve done here, using a demonstration
directory. We specified a that it should get run-script source from
the zope section. Here the zope section is simply a configuration
section with a run-script option set directly, but it could have been
a part with a run-script option computed from the recipe.If we run the buildout:>>> print system('bin/buildout'),
Installing zoperc.We’ll get a zoperc script in our demo directory:>>> ls(demo)
- zoperc>>> cat(demo, 'zoperc')
#!/bin/sh
<BLANKLINE>
# This script is for adminstrator convenience. It should
# NOT be installed as a system startup script!
<BLANKLINE>
<BLANKLINE>
case $1 in
stop)
<BLANKLINE>
/opt/zope/bin/zopectl -C /etc/zope.conf $*
<BLANKLINE>
;;
restart)
<BLANKLINE>
${0} stop
sleep 1
${0} start
<BLANKLINE>
;;
*)
<BLANKLINE>
/opt/zope/bin/zopectl -C /etc/zope.conf $*
<BLANKLINE>
;;
esac
<BLANKLINE>There are a couple of things to note about the generated script:It uses $* to pass arguments, so arguments can’t be quoted. This is
OK because the arguments will be simple verbs like start and stop.It includes a comment saying that the script shouldn’t be used as a
system startup script.For the script to be used for system startup, we need to specify
run-level information. We can to that using the chkconfig option:>>> write('buildout.cfg',
... """
... [buildout]
... parts = zoperc
...
... [zoperc]
... recipe = zc.recipe.rhrc
... parts = zope
... dest = %(dest)s
... chkconfig = 345 90 10
... chkconfigcommand = echo
...
... [zope]
... run-script = /opt/zope/bin/zopectl -C /etc/zope.conf
... """ % dict(dest=demo))Here we included a chkconfig option saying that Zope should be started
at run levels 3, 4, and 5 and that it’s start and stop ordered should
be 90 and 10.For demonstration purposes, we don’treallywant to run chkconfig,
so we use the chkconfigcommand option to tell the recipe to run echo
instead.>>> print system('bin/buildout'),
Uninstalling zoperc.
Running uninstall recipe.
Installing zoperc.
--add zopercNow the script contains a chkconfig comment:>>> cat(demo, 'zoperc')
#!/bin/sh
<BLANKLINE>
# the next line is for chkconfig
# chkconfig: 345 90 10
# description: please, please work
<BLANKLINE>
<BLANKLINE>
case $1 in
stop)
<BLANKLINE>
/opt/zope/bin/zopectl -C /etc/zope.conf $* \
</dev/null
<BLANKLINE>
;;
restart)
<BLANKLINE>
${0} stop
sleep 1
${0} start
<BLANKLINE>
;;
*)
<BLANKLINE>
/opt/zope/bin/zopectl -C /etc/zope.conf $* \
</dev/null
<BLANKLINE>
;;
esac
<BLANKLINE>We can specify a user that the script should be run as:>>> write('buildout.cfg',
... """
... [buildout]
... parts = zoperc
...
... [zoperc]
... recipe = zc.recipe.rhrc
... parts = zope
... dest = %(dest)s
... chkconfig = 345 90 10
... chkconfigcommand = echo
... user = zope
...
... [zope]
... run-script = /opt/zope/bin/zopectl -C /etc/zope.conf
... """ % dict(dest=demo))>>> print system('bin/buildout'),
Uninstalling zoperc.
Running uninstall recipe.
--del zoperc
Installing zoperc.
--add zopercNote the –del output. If we hadn’t set the chkconfigcommand to echo,
then chkconfig –del would have been run on the zoperc script.>>> cat(demo, 'zoperc')
#!/bin/sh
<BLANKLINE>
# the next line is for chkconfig
# chkconfig: 345 90 10
# description: please, please work
<BLANKLINE>
<BLANKLINE>
if [ $(whoami) != "root" ]; then
echo "You must be root."
exit 1
fi
<BLANKLINE>
case $1 in
stop)
<BLANKLINE>
su zope -c \
"/opt/zope/bin/zopectl -C /etc/zope.conf $*" \
</dev/null
<BLANKLINE>
;;
restart)
<BLANKLINE>
${0} stop
sleep 1
${0} start
<BLANKLINE>
;;
*)
<BLANKLINE>
su zope -c \
"/opt/zope/bin/zopectl -C /etc/zope.conf $*" \
</dev/null
<BLANKLINE>
;;
esac
<BLANKLINE>Note that now the su command is used to run the script. Because the
script is included in double quotes, it can’t contain double
quotes. (The recipe makes no attempt to escape double quotes.)Also note that now the script must be run as root, so the generated
script checks that root is running it.If we say the user is root:>>> write('buildout.cfg',
... """
... [buildout]
... parts = zoperc
...
... [zoperc]
... recipe = zc.recipe.rhrc
... parts = zope
... dest = %(dest)s
... chkconfig = 345 90 10
... chkconfigcommand = echo
... user = root
...
... [zope]
... run-script = /opt/zope/bin/zopectl -C /etc/zope.conf
... """ % dict(dest=demo))Then the generated script won’t su, but it will still check that root
is running it:>>> print system('bin/buildout'),
Uninstalling zoperc.
Running uninstall recipe.
--del zoperc
Installing zoperc.
--add zoperc>>> cat(demo, 'zoperc')
#!/bin/sh
<BLANKLINE>
# the next line is for chkconfig
# chkconfig: 345 90 10
# description: please, please work
<BLANKLINE>
<BLANKLINE>
if [ $(whoami) != "root" ]; then
echo "You must be root."
exit 1
fi
<BLANKLINE>
case $1 in
stop)
<BLANKLINE>
/opt/zope/bin/zopectl -C /etc/zope.conf $* \
</dev/null
<BLANKLINE>
;;
restart)
<BLANKLINE>
${0} stop
sleep 1
${0} start
<BLANKLINE>
;;
*)
<BLANKLINE>
/opt/zope/bin/zopectl -C /etc/zope.conf $* \
</dev/null
<BLANKLINE>
;;
esac
<BLANKLINE>A part that defines a run script can also define environment-variable
settings to be used by the rc script by supplying an env option:>>> write('buildout.cfg',
... """
... [buildout]
... parts = zoperc
...
... [zoperc]
... recipe = zc.recipe.rhrc
... parts = zope
... dest = %(dest)s
... chkconfig = 345 90 10
... chkconfigcommand = echo
... user = zope
...
... [zope]
... run-script = /opt/zope/bin/zopectl -C /etc/zope.conf
... env = LD_LIBRARY_PATH=/opt/foolib
... """ % dict(dest=demo))>>> print system('bin/buildout'),
Uninstalling zoperc.
Running uninstall recipe.
--del zoperc
Installing zoperc.
--add zoperc>>> cat(demo, 'zoperc')
#!/bin/sh
<BLANKLINE>
# the next line is for chkconfig
# chkconfig: 345 90 10
# description: please, please work
<BLANKLINE>
<BLANKLINE>
if [ $(whoami) != "root" ]; then
echo "You must be root."
exit 1
fi
<BLANKLINE>
case $1 in
stop)
<BLANKLINE>
LD_LIBRARY_PATH=/opt/foolib \
su zope -c \
"/opt/zope/bin/zopectl -C /etc/zope.conf $*" \
</dev/null
<BLANKLINE>
;;
restart)
<BLANKLINE>
${0} stop
sleep 1
${0} start
<BLANKLINE>
;;
*)
<BLANKLINE>
LD_LIBRARY_PATH=/opt/foolib \
su zope -c \
"/opt/zope/bin/zopectl -C /etc/zope.conf $*" \
</dev/null
<BLANKLINE>
;;
esac
<BLANKLINE>Working with existing control scriptsIn the example above, we generated a script based on a command line.
If we have a part that creates a control script on it’s own, then it
can ommit the run-script option and it’s already created run script
will be used. Let’s create a run script ourselves:>>> write(demo, 'zope', '/opt/zope/bin/zopectl -C /etc/zope.conf $*')Now we can remove the run-script option from the Zope section:>>> write('buildout.cfg',
... """
... [buildout]
... parts = zoperc
...
... [zoperc]
... recipe = zc.recipe.rhrc
... parts = zope
... dest = %(dest)s
... chkconfig = 345 90 10
... chkconfigcommand = echo
... user = zope
...
... [zope]
... env = LD_LIBRARY_PATH=/opt/foolib
... """ % dict(dest=demo))>>> print system('bin/buildout'),
Uninstalling zoperc.
Running uninstall recipe.
--del zoperc
Installing zoperc.
--add zoperc>>> cat(demo, 'zoperc')
#!/bin/sh
<BLANKLINE>
# the next line is for chkconfig
# chkconfig: 345 90 10
# description: please, please work
<BLANKLINE>
<BLANKLINE>
if [ $(whoami) != "root" ]; then
echo "You must be root."
exit 1
fi
<BLANKLINE>
case $1 in
stop)
<BLANKLINE>
echo zope:
/demo/zope "$@" \
</dev/null
<BLANKLINE>
;;
restart)
<BLANKLINE>
${0} stop
sleep 1
${0} start
<BLANKLINE>
;;
*)
<BLANKLINE>
echo zope:
/demo/zope "$@" \
</dev/null
<BLANKLINE>
;;
esac
<BLANKLINE>Here we just invoke the existing script. Note that don’t pay any
reflect the env or user options in the script. When an existing
script is used, it is assumed to be complete.>>> import os
>>> os.remove(join(demo, 'zope'))Multiple processesSometimes, you need to start multiple processes. You can specify
multiple parts. For example, suppose we wanted to start 2 Zope
instances:>>> write('buildout.cfg',
... """
... [buildout]
... parts = zoperc
...
... [zoperc]
... recipe = zc.recipe.rhrc
... parts = instance1 instance2
... dest = %(dest)s
... chkconfig = 345 90 10
... chkconfigcommand = echo
... user = zope
...
... [instance1]
... run-script = /opt/zope/bin/zopectl -C /etc/instance1.conf
... env = LD_LIBRARY_PATH=/opt/foolib
...
... [instance2]
... """ % dict(dest=demo))>>> write(demo, 'instance2', '')Note that for instance 2, we are arranging for the script to pre-exist.>>> print system('bin/buildout'),
Uninstalling zoperc.
Running uninstall recipe.
--del zoperc
Installing zoperc.
--add zoperc>>> cat(demo, 'zoperc')
#!/bin/sh
<BLANKLINE>
# the next line is for chkconfig
# chkconfig: 345 90 10
# description: please, please work
<BLANKLINE>
<BLANKLINE>
if [ $(whoami) != "root" ]; then
echo "You must be root."
exit 1
fi
<BLANKLINE>
case $1 in
stop)
<BLANKLINE>
echo instance2:
/demo/instance2 "$@" \
</dev/null
<BLANKLINE>
LD_LIBRARY_PATH=/opt/foolib \
su zope -c \
"/opt/zope/bin/zopectl -C /etc/instance1.conf $*" \
</dev/null
<BLANKLINE>
;;
restart)
<BLANKLINE>
${0} stop
sleep 1
${0} start
<BLANKLINE>
;;
*)
<BLANKLINE>
LD_LIBRARY_PATH=/opt/foolib \
su zope -c \
"/opt/zope/bin/zopectl -C /etc/instance1.conf $*" \
</dev/null
<BLANKLINE>
echo instance2:
/demo/instance2 "$@" \
</dev/null
<BLANKLINE>
;;
esac
<BLANKLINE>Now the rc script starts both instances. Note that it stops them in
reverese order. This isn’t so important in a case like this, but
would be more important if a later script depended on an earlier one.In addition to the zoperc script, we got scripts for the instance with
the run-script option:>>> ls(demo)
- instance2
- zoperc
- zoperc-instance1>>> cat(demo, 'zoperc-instance1')
#!/bin/sh
<BLANKLINE>
# This script is for adminstrator convenience. It should
# NOT be installed as a system startup script!
<BLANKLINE>
<BLANKLINE>
if [ $(whoami) != "root" ]; then
echo "You must be root."
exit 1
fi
<BLANKLINE>
case $1 in
stop)
<BLANKLINE>
LD_LIBRARY_PATH=/opt/foolib \
su zope -c \
"/opt/zope/bin/zopectl -C /etc/instance1.conf $*"
<BLANKLINE>
;;
restart)
<BLANKLINE>
${0} stop
sleep 1
${0} start
<BLANKLINE>
;;
*)
<BLANKLINE>
LD_LIBRARY_PATH=/opt/foolib \
su zope -c \
"/opt/zope/bin/zopectl -C /etc/instance1.conf $*"
<BLANKLINE>
;;
esac
<BLANKLINE>The individual scripts don’t have chkconfig information.Independent processesNormally, processes are assumed to be dependent and are started in
order, stopped in referese order, and, on restart, are all stopped and
then all started.If the independent-processes option is used, then the generated master
run script will treat the processes as independent and restart
processed individually. With lots of independent processes, this can
reduce the amount of time individual processes are down.>>> write('buildout.cfg',
... """
... [buildout]
... parts = zoperc
...
... [zoperc]
... recipe = zc.recipe.rhrc
... parts = instance1 instance2
... dest = %(dest)s
... chkconfig = 345 90 10
... chkconfigcommand = echo
... user = zope
... independent-processes = true
...
... [instance1]
... run-script = /opt/zope/bin/zopectl -C /etc/instance1.conf
... env = LD_LIBRARY_PATH=/opt/foolib
...
... [instance2]
... """ % dict(dest=demo))>>> print system('bin/buildout'),
Uninstalling zoperc.
Running uninstall recipe.
--del zoperc
Installing zoperc.
--add zoperc>>> cat(demo, 'zoperc')
#!/bin/sh
<BLANKLINE>
# the next line is for chkconfig
# chkconfig: 345 90 10
# description: please, please work
<BLANKLINE>
<BLANKLINE>
if [ $(whoami) != "root" ]; then
echo "You must be root."
exit 1
fi
<BLANKLINE>
LD_LIBRARY_PATH=/opt/foolib \
su zope -c \
"/opt/zope/bin/zopectl -C /etc/instance1.conf $*" \
</dev/null
<BLANKLINE>
echo instance2:
/demo/instance2 "$@" \
</dev/nullDeploymentsThe zc.recipe.rhrc recipe is designed to work with the
zc.recipe.deployment recipe. You can specify the name of a deployment
section. If a deployment section is specified then:the deployment name will be used for the rc scriptsthe user from the deployment section will be used if a user isn’t
specified in the rc script’s own section.the rc-directory option from the deployment will be used if
destination isn’t specified.>>> write('buildout.cfg',
... """
... [buildout]
... parts = zoperc
...
... [deployment]
... name = acme
... user = acme
... rc-directory = %(dest)s
...
... [zoperc]
... recipe = zc.recipe.rhrc
... parts = instance1 instance2
... chkconfig = 345 90 10
... chkconfigcommand = echo
... deployment = deployment
...
... [instance1]
... run-script = /opt/zope/bin/zopectl -C /etc/instance1.conf
... env = LD_LIBRARY_PATH=/opt/foolib
...
... [instance2]
... """ % dict(dest=demo))If a deployment is used, then any existing scripts must be
prefixed with the deployment name. We’ll rename the instance2 script
to reflect that:>>> os.rename(join(demo, 'instance2'), join(demo, 'acme-instance2'))>>> print system('bin/buildout'),
Uninstalling zoperc.
Running uninstall recipe.
--del zoperc
Installing zoperc.
--add acme>>> ls(demo)
- acme
- acme-instance1
- acme-instance2>>> cat(demo, 'acme')
#!/bin/sh
<BLANKLINE>
# the next line is for chkconfig
# chkconfig: 345 90 10
# description: please, please work
<BLANKLINE>
<BLANKLINE>
if [ $(whoami) != "root" ]; then
echo "You must be root."
exit 1
fi
<BLANKLINE>
case $1 in
stop)
<BLANKLINE>
echo acme-instance2:
/demo/acme-instance2 "$@" \
</dev/null
<BLANKLINE>
LD_LIBRARY_PATH=/opt/foolib \
su acme -c \
"/opt/zope/bin/zopectl -C /etc/instance1.conf $*" \
</dev/null
<BLANKLINE>
;;
restart)
<BLANKLINE>
${0} stop
sleep 1
${0} start
<BLANKLINE>
;;
*)
<BLANKLINE>
LD_LIBRARY_PATH=/opt/foolib \
su acme -c \
"/opt/zope/bin/zopectl -C /etc/instance1.conf $*" \
</dev/null
<BLANKLINE>
echo acme-instance2:
/demo/acme-instance2 "$@" \
</dev/null
<BLANKLINE>
;;
esac
<BLANKLINE>Edge case, when we remove the part, we uninstall acme:>>> write('buildout.cfg',
... """
... [buildout]
... parts =
... """)
>>> print system('bin/buildout'),
Uninstalling zoperc.
Running uninstall recipe.
--del acmeProcess ManagementNormally, the recipe doesn’t start and stop processes. If we want it
to, we can use the process-management option with a ‘true’ value.>>> write('buildout.cfg',
... """
... [buildout]
... parts = zoperc
...
... [zoperc]
... recipe = zc.recipe.rhrc
... parts = zope
... dest = %(dest)s
... process-management = true
...
... [zope]
... run-script = echo zope
... """ % dict(dest=demo))When the part is installed, the process is started:>>> print system('bin/buildout'),
Installing zoperc.
zope startIt also gets started when the part updates. This is just to make sure
it is running.>>> print system('bin/buildout'),
Updating zoperc.
zope startIf we update the part, then when the part is uninstalled and
reinstalled, the process will be stopped and started. We’ll often
force this adding a digest option that exists solely to force a
reinstall, typically because something else in the buildout has
changed.>>> write('buildout.cfg',
... """
... [buildout]
... parts = zoperc
...
... [zoperc]
... recipe = zc.recipe.rhrc
... parts = zope
... dest = %(dest)s
... process-management = true
... digest = 1
...
... [zope]
... run-script = echo zope
... """ % dict(dest=demo))>>> print system('bin/buildout'),
Uninstalling zoperc.
Running uninstall recipe.
zope stop
Installing zoperc.
zope start>>> print system('bin/buildout buildout:parts='),
Uninstalling zoperc.
Running uninstall recipe.
zope stopRegression TestsException formatting bugIf we do not provide a runscript, we get an exception (bug was: improperly
formatted exception string, contained literal ‘%s’):>>> write('buildout.cfg',
... """
... [buildout]
... parts = zoperc
...
... [zoperc]
... recipe = zc.recipe.rhrc
... parts = zope
... dest = %(dest)s
...
... [zope]
... """ % dict(dest=demo))
>>> print system('bin/buildout'),
Installing zoperc.
zc.recipe.rhrc: Part zope doesn't define run-script and /demo/zope doesn't exist.
While:
Installing zoperc.
Error: No script for zope |
zc.recipe.script | Many deployments provide scripts that tie the configurations into the
software. This is often done to make it easier to work with specific
deployments of the software.The conventional Unix file hierarchy doesn’t really provide a good
shared place for such scripts; the zc.recipe.deployment:script recipe
generates these scripts in the deployment’s bin-directory, but we’d
rather have the resulting scripts associated with the deployment itself.The options for the recipe are the same as those for the
zc.recipe.egg:script recipe, with the addition of a required deployment
setting. The etc-directory from the deployment is used instead of the
buildout’s bin-directory. This allows deployment-specific information
to be embedded in the script via the initialization setting.Let’s take a look at a simple case. We’ll need a package with a
console_script entry point:>>> write('setup.py', '''\
... from setuptools import setup
... setup(
... name="testpkg",
... package_dir={"": "src"},
... py_modules=["testmodule"],
... zip_safe=False,
... entry_points={
... "console_scripts": [
... "myscript=testmodule:main",
... ],
... },
... )
... ''')>>> mkdir('src')
>>> write('src', 'testmodule.py', '''\
... some_setting = "42"
... def main():
... print some_setting
... ''')>>> write('buildout.cfg',
... '''
... [buildout]
... develop = .
... parts = somescript
...
... [mydep]
... recipe = zc.recipe.deployment
... prefix = %s
... user = %s
... etc-user = %s
...
... [somescript]
... recipe = zc.recipe.script
... deployment = mydep
... eggs = testpkg
... scripts = myscript
... initialization =
... import testmodule
... testmodule.some_setting = "24"
... ''' % (sample_buildout, user, user))>>> print system(join('bin', 'buildout')), # doctest: +NORMALIZE_WHITESPACE
Develop: 'PREFIX/.'
Installing mydep.
zc.recipe.deployment:
Creating 'PREFIX/etc/mydep',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/var/cache/mydep',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/var/lib/mydep',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/var/log/mydep',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/var/run/mydep',
mode 750, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/etc/cron.d',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/etc/init.d',
mode 755, user 'USER', group 'GROUP'
zc.recipe.deployment:
Creating 'PREFIX/etc/logrotate.d',
mode 755, user 'USER', group 'GROUP'
Installing somescript.
Generated script 'PREFIX/etc/mydep/myscript'.>>> print ls("etc/mydep")
drwxr-xr-x USER GROUP etc/mydep>>> cat("etc/mydep/myscript") # doctest: +NORMALIZE_WHITESPACE
#!/usr/bin/python
<BLANKLINE>
import sys
sys.path[0:0] = [
'PREFIX/src',
]
<BLANKLINE>
import testmodule
testmodule.some_setting = "24"
<BLANKLINE>
import testmodule
<BLANKLINE>
if __name__ == '__main__':
sys.exit(testmodule.main())Release history1.0.2 (2014-08-19)Fix packaging bug (include src/zc/recipe/script/README.txt).1.0.1 (2014-08-19)Initial public release.1.0.0 (2011-12-29)Initial release (ZC internal). |
zc.recipe.testrunner | ContentsChange History3.0 (2023-02-08)2.2 (2020-11-30)2.1 (2019-05-14)2.0.0 (2013-02-10)1.4.0 (2010-08-27)1.3.0 (2010-06-09)1.2.1 (2010-08-24)1.2.0 (2009-03-23)1.1.0 (2008-08-25)1.0.0 (2007-11-04)1.0.0b8 (2007-07-17)1.0.0b7 (2007-04-26)1.0.0b6 (2007-02-25)1.0.0b5 (2007-01-24)1.0.0b4 (2006-10-24)1.0.0b3 (2006-10-16)1.0.0b21.0.0b11.0.0a31.0.0a21.0.0a1Detailed DocumentationThis recipe generates zope.testing test-runner scripts for testing a
collection of eggs.Example usage inbuildout.cfg:[buildout]
parts = test
[test]
recipe = zc.recipe.testrunner
eggs = <eggs to test>

Change History

3.0 (2023-02-08)

Add support for Python 3.10, 3.11.
Drop support for Python 2.7, 3.5, 3.6.

2.2 (2020-11-30)

Add support for Python 3.9, PyPy2 and PyPy3.

2.1 (2019-05-14)

Add support for Python 3.5 up to 3.8a3.

2.0.0 (2013-02-10)

Work with buildout 2.

This was accomplished by starting from 1.3.0 then:

Merge fixes from 1.2.1
(svn://svn.zope.org/repos/main/zc.recipe.testrunner/tags/1.2.1)
Excluding nailing zope.testing version. That fixes a bunch of
windows issues1.4.0 (2010-08-27)Update to using zc.buildout 1.5.0 script generation. This adds the
following options: include-site-packages, allowed-eggs-from-site-packages,
extends, and exec-sitecustomize.Merge fixes from 1.2.1
(svn://svn.zope.org/repos/main/zc.recipe.testrunner/tags/1.2.1)
Excluding nailing zope.testing version. That fixes a bunch of
windows issues1.3.0 (2010-06-09)Updated tests to run with the last versions of all modules.Removed the usage of the deprecated zope.testing.doctest, thereby also
dropping Python 2.3 support.Started using zope.testrunner instead of zope.testing.testrunner.1.2.1 (2010-08-24)Fixed a lot of windows issuesNailed versions to ZTK 1.0a2 (oh well, we have to have at least some stability)Fixed some other test failures that seemed to come from other packages1.2.0 (2009-03-23)Added a relative-paths option to use egg, test, and
working-directory paths relative to the test script.1.1.0 (2008-08-25)Requiring at least zope.testing 3.6.0.Fixed a bug: Parallel runs of layers failed when using
working-directory parameter.1.0.0 (2007-11-04)Preparing stable release.1.0.0b8 (2007-07-17)Added the ability to useinitializationoption that will be inserted into
the bin/test after the environment is set up.1.0.0b7 (2007-04-26)Feature ChangesAdded optional optionenvironmentthat allows defining a section in your
buildout.cfg to specify environment variables that should be set before
running the tests.1.0.0b6 (2007-02-25)Feature ChangesIf the working directory is not specified, or specified as the empty
string, an empty part directory is created for the tests to run in.1.0.0b5 (2007-01-24)Bugs fixedWhen:the working-directory option was used,and the test runner needed to restart itselfand the test runner was run with a relative path (e.g. bin/test)then the testrunner could not restart itself successfully because the
relative path in sys.argv[0] was no-longer valid.Now we convert sys.argv[0] to an absolute path.1.0.0b4 (2006-10-24)Feature ChangesAdded a working-directoy option to specify a working directory for
the generated script.1.0.0b3 (2006-10-16)Updated to work with (not get a warning from) zc.buildout 1.0.0b10.1.0.0b2Added a defaults option to specify testrunner default options.1.0.0b1Updated to work with zc.buildout 1.0.0b5.1.0.0a3Added a defaults option that lets you specify test-runner default
options.1.0.0a2Now provide a extra-paths option for including extra paths in test
scripts. This is useful when eggs depend on Python packages not
packaged as eggs.1.0.0a1Initial public versionDetailed DocumentationTest-Runner RecipeThe test-runner recipe, zc.recipe.testrunner, creates a test runner
for a project.The test-runner recipe has several options:eggsThe eggs option specified a list of eggs to test given as one ore
more setuptools requirement strings. Each string must be given on
a separate line.scriptThe script option gives the name of the script to generate, in the
buildout bin directory. Of the option isn’t used, the part name
will be used.extra-pathsOne or more extra paths to include in the generated test script.defaultsThe defaults option lets you specify testrunner default
options. These are specified as Python source for an expression
yielding a list, typically a list literal.working-directoryThe working-directory option lets to specify a directory where the
tests will run. The testrunner will change to this directory when
run. If the working directory is the empty string or not specified
at all, the recipe will create a working directory among the parts.environmentA set of environment variables that should be exported before
starting the tests.initializationProvide initialization code to run before running tests.relative-pathsUse egg, test, and working-directory paths relative to the test script.(Note that, at this time, due to limitations in the Zope test runner, the
distributions cannot be zip files. TODO: Fix the test runner!)To illustrate this, we’ll create a pair of projects in our sample
buildout:>>> mkdir(sample_buildout, 'demo')
>>> mkdir(sample_buildout, 'demo', 'demo')
>>> write(sample_buildout, 'demo', 'demo', '__init__.py', '')
>>> write(sample_buildout, 'demo', 'demo', 'tests.py',
... '''
... import unittest
...
... class TestDemo(unittest.TestCase):
... def test(self):
... pass
...
... def test_suite():
... loader = unittest.TestLoader()
... return loader.loadTestsFromTestCase(TestDemo)
... ''')>>> write(sample_buildout, 'demo', 'setup.py',
... """
... from setuptools import setup
...
... setup(name = "demo")
... """)>>> write(sample_buildout, 'demo', 'README.txt', '')>>> mkdir(sample_buildout, 'demo2')
>>> mkdir(sample_buildout, 'demo2', 'demo2')
>>> write(sample_buildout, 'demo2', 'demo2', '__init__.py', '')
>>> write(sample_buildout, 'demo2', 'demo2', 'tests.py',
... '''
... import unittest
...
... class Demo2Tests(unittest.TestCase):
... def test2(self):
... pass
...
... def test_suite():
... loader = unittest.TestLoader()
... return loader.loadTestsFromTestCase(Demo2Tests)
... ''')>>> write(sample_buildout, 'demo2', 'setup.py',
... """
... from setuptools import setup
...
... setup(name = "demo2", install_requires= ['demoneeded'])
... """)>>> write(sample_buildout, 'demo2', 'README.txt', '')Demo 2 depends on demoneeded:>>> mkdir(sample_buildout, 'demoneeded')
>>> mkdir(sample_buildout, 'demoneeded', 'demoneeded')
>>> write(sample_buildout, 'demoneeded', 'demoneeded', '__init__.py', '')
>>> write(sample_buildout, 'demoneeded', 'demoneeded', 'tests.py',
... '''
... import unittest
...
... class TestNeeded(unittest.TestCase):
... def test_needed(self):
... pass
...
... def test_suite():
... loader = unittest.TestLoader()
... return loader.loadTestsFromTestCase(TestNeeded)
... ''')>>> write(sample_buildout, 'demoneeded', 'setup.py',
... """
... from setuptools import setup
...
... setup(name = "demoneeded")
... """)>>> write(sample_buildout, 'demoneeded', 'README.txt', '')We’ll update our buildout to install the demo project as a
develop egg and to create the test script:>>> write(sample_buildout, 'buildout.cfg',
... """
... [buildout]
... develop = demo demoneeded demo2
... parts = testdemo
... offline = true
...
... [testdemo]
... recipe = zc.recipe.testrunner
... eggs =
... demo
... demo2
... script = test
... """)Note that we specified both demo and demo2 in the eggs
option and that we put them on separate lines.We also specified the offline option to run the buildout in offline mode.Now when we run the buildout:>>> import os
>>> os.chdir(sample_buildout)
>>> print_(system(os.path.join(sample_buildout, 'bin', 'buildout') + ' -q'),
... end='')We get a test script installed in our bin directory:>>> ls(sample_buildout, 'bin')
- buildout
- testWe also get a part directory for the tests to run in:>>> ls(sample_buildout, 'parts')
d testdemoAnd updating leaves its contents intact:>>> _ = system(os.path.join(sample_buildout, 'bin', 'test') +
... ' -q --coverage=coverage')
>>> ls(sample_buildout, 'parts', 'testdemo')
d coverage
>>> print_(system(os.path.join(sample_buildout, 'bin', 'buildout') + ' -q'),
... end='')
>>> ls(sample_buildout, 'parts', 'testdemo')
d coverageWe can run the test script to run our demo test:>>> print_(system(os.path.join(sample_buildout, 'bin', 'test') + ' -vv'),
... end='')
Running tests at level 1
Running zope.testrunner.layer.UnitTests tests:
Set up zope.testrunner.layer.UnitTests in 0.001 seconds.
Running:
test (demo.tests.TestDemo...)
test2 (demo2.tests.Demo2Tests...)
Ran 2 tests with 0 failures, 0 errors and 0 skipped in 0.001 seconds.
Tearing down left over layers:
Tear down zope.testrunner.layer.UnitTests in 0.001 seconds.Note that we didn’t run the demoneeded tests. Tests are only run for
the eggs listed, not for their dependencies.If we leave the script option out of the configuration, then the test
script will get its name from the part:>>> write(sample_buildout, 'buildout.cfg',
... """
... [buildout]
... develop = demo
... parts = testdemo
... offline = true
...
... [testdemo]
... recipe = zc.recipe.testrunner
... eggs = demo
... """)>>> print_(system(os.path.join(sample_buildout, 'bin', 'buildout') + ' -q'),
... end='')>>> ls(sample_buildout, 'bin')
- buildout
- testdemoWe can run the test script to run our demo test:>>> print_(system(os.path.join(sample_buildout, 'bin', 'testdemo') + ' -q'),
... end='')
Running zope.testrunner.layer.UnitTests tests:
Set up zope.testrunner.layer.UnitTests in 0.001 seconds.
Ran 1 tests with 0 failures, 0 errors and 0 skipped in 0.001 seconds.
Tearing down left over layers:
Tear down zope.testrunner.layer.UnitTests in 0.001 seconds.If we need to include other paths in our test script, we can use the
extra-paths option to specify them:>>> write(sample_buildout, 'buildout.cfg',
... """
... [buildout]
... develop = demo
... parts = testdemo
... offline = true
...
... [testdemo]
... recipe = zc.recipe.testrunner
... eggs = demo
... extra-paths = /usr/local/zope/lib/python
... """)>>> print_(system(os.path.join(sample_buildout, 'bin', 'buildout') + ' -q'),
... end='')>>> cat(sample_buildout, 'bin', 'testdemo') # doctest: +ELLIPSIS
#!/usr/local/bin/python2.4
<BLANKLINE>
import sys
sys.path[0:0] = [
...
]
<BLANKLINE>
import os
sys.argv[0] = os.path.abspath(sys.argv[0])
os.chdir('/sample-buildout/parts/testdemo')
<BLANKLINE>
<BLANKLINE>
import zope.testrunner
<BLANKLINE>
if __name__ == '__main__':
sys.exit(zope.testrunner.run([
'--test-path', '/sample-buildout/demo',
]))We can use the working-directory option to specify a working
directory:>>> write(sample_buildout, 'buildout.cfg',
... """
... [buildout]
... develop = demo
... parts = testdemo
... offline = true
...
... [testdemo]
... recipe = zc.recipe.testrunner
... eggs = demo
... extra-paths = /usr/local/zope/lib/python
... working-directory = /foo/bar
... """)>>> print_(system(os.path.join(sample_buildout, 'bin', 'buildout') + ' -q'),
... end='')>>> cat(sample_buildout, 'bin', 'testdemo') # doctest: +ELLIPSIS
#!/usr/local/bin/python2.4
<BLANKLINE>
import sys
sys.path[0:0] = [
...
]
<BLANKLINE>
import os
sys.argv[0] = os.path.abspath(sys.argv[0])
os.chdir('/foo/bar')
<BLANKLINE>
<BLANKLINE>
import zope.testrunner
<BLANKLINE>
if __name__ == '__main__':
sys.exit(zope.testrunner.run([
'--test-path', '/sample-buildout/demo',
]))Now that our tests use a specified working directory, their designated
part directory is gone:>>> ls(sample_buildout, 'parts')If we need to specify default options, we can use the defaults
option. For example, Zope 3 applications typically define test suites
in modules named ftests or tests. The default test runner behaviour
is to look in modules named tests. To specify that we want to look in
tests and ftests modules, we'd supply a default for the --tests-pattern
option. If we like dots, we could also request more verbose output
using the -v option:>>> write(sample_buildout, 'buildout.cfg',
... """
... [buildout]
... develop = demo
... parts = testdemo
... offline = true
...
... [testdemo]
... recipe = zc.recipe.testrunner
... eggs = demo
... extra-paths = /usr/local/zope/lib/python
... defaults = ['--tests-pattern', '^f?tests$',
... '-v'
... ]
... """)>>> print_(system(os.path.join(sample_buildout, 'bin', 'buildout') + ' -q'),
... end='')>>> cat(sample_buildout, 'bin', 'testdemo') # doctest: +ELLIPSIS
#!/usr/local/bin/python2.4
<BLANKLINE>
import sys
sys.path[0:0] = [
...
]
<BLANKLINE>
import os
sys.argv[0] = os.path.abspath(sys.argv[0])
os.chdir('/sample-buildout/parts/testdemo')
<BLANKLINE>
<BLANKLINE>
import zope.testrunner
<BLANKLINE>
if __name__ == '__main__':
sys.exit(zope.testrunner.run((['--tests-pattern', '^f?tests$',
'-v'
]) + [
'--test-path', '/sample-buildout/demo',
]))Some things to note from this example:Parentheses are placed around the given expression.Leading whitespace is removed.To demonstrate the environment option, we first update the tests to
include a check for an environment variable:>>> write(sample_buildout, 'demo', 'demo', 'tests.py',
... '''
... import unittest
... import os
...
... class DemoTests(unittest.TestCase):
... def test(self):
... self.assertEqual('42', os.environ.get('zc.recipe.testrunner', '23'))
...
... def test_suite():
... loader = unittest.TestLoader()
... return loader.loadTestsFromTestCase(DemoTests)
... ''')Running them with the current buildout will produce a failure:>>> print_(system(os.path.join(sample_buildout, 'bin', 'testdemo')
... + ' -vv'),
... end='') # doctest: +ELLIPSIS
Running tests at level 1
Running zope.testrunner.layer.UnitTests tests:
Set up zope.testrunner.layer.UnitTests in 0.001 seconds.
Running:
test (demo.tests.DemoTests...) (... s)
<BLANKLINE>
<BLANKLINE>
Failure in test test (demo.tests.DemoTests...)
Traceback (most recent call last):
...
AssertionError: '42' != '23'
...
Ran 1 tests with 1 failures, 0 errors and 0 skipped in 0.001 seconds.
Tearing down left over layers:
Tear down zope.testrunner.layer.UnitTests in 0.001 seconds.
<BLANKLINE>
Tests with failures:
test (demo.tests.DemoTests...)Let’s update the buildout to specify the environment variable for the test
runner:>>> write(sample_buildout, 'buildout.cfg',
... """
... [buildout]
... develop = demo
... parts = testdemo
... offline = true
...
... [testdemo]
... recipe = zc.recipe.testrunner
... eggs = demo
... environment = testenv
...
... [testenv]
... zc.recipe.testrunner = 42
... """)We run buildout and see that the test runner script now includes setting up
the environment variable. Also, the tests pass again:>>> print_(system(os.path.join(sample_buildout, 'bin', 'buildout') + ' -q'),
... end='')>>> cat(sample_buildout, 'bin', 'testdemo') # doctest: +ELLIPSIS
#!/usr/local/bin/python2.4
<BLANKLINE>
import sys
sys.path[0:0] = [
...
]
<BLANKLINE>
import os
sys.argv[0] = os.path.abspath(sys.argv[0])
os.chdir('/sample-buildout/parts/testdemo')
os.environ['zc.recipe.testrunner'] = '42'
<BLANKLINE>
<BLANKLINE>
import zope.testrunner
<BLANKLINE>
if __name__ == '__main__':
sys.exit(zope.testrunner.run([
'--test-path', '/sample-buildout/demo',
]))>>> print_(system(os.path.join(sample_buildout, 'bin', 'testdemo')+' -vv'),
... end='')
Running tests at level 1
Running zope.testrunner.layer.UnitTests tests:
Set up zope.testrunner.layer.UnitTests in 0.001 seconds.
Running:
test (demo.tests.DemoTests...)
Ran 1 tests with 0 failures, 0 errors and 0 skipped in 0.001 seconds.
Tearing down left over layers:
Tear down zope.testrunner.layer.UnitTests in 0.001 seconds.One can add initialization steps in the buildout. These will be added to the
end of the script:>>> write(sample_buildout, 'buildout.cfg',
... r"""
... [buildout]
... develop = demo
... parts = testdemo
... offline = true
...
... [testdemo]
... recipe = zc.recipe.testrunner
... eggs = demo
... extra-paths = /usr/local/zope/lib/python
... defaults = ['--tests-pattern', '^f?tests$',
... '-v'
... ]
... initialization = sys.stdout.write('Hello all you egg-laying pythons!\n')
... """)>>> print_(system(os.path.join(sample_buildout, 'bin', 'buildout') + ' -q'),
... end='')>>> cat(sample_buildout, 'bin', 'testdemo') # doctest: +ELLIPSIS
#!/usr/local/bin/python2.4
<BLANKLINE>
import sys
sys.path[0:0] = [
...
]
<BLANKLINE>
import os
sys.argv[0] = os.path.abspath(sys.argv[0])
os.chdir('/sample-buildout/parts/testdemo')
sys.stdout.write('Hello all you egg-laying pythons!\n')
<BLANKLINE>
import zope.testrunner
<BLANKLINE>
if __name__ == '__main__':
sys.exit(zope.testrunner.run((['--tests-pattern', '^f?tests$',
'-v'
]) + [
'--test-path', '/sample-buildout/demo',
]))This will also work with a multi-line initialization section:>>> write(sample_buildout, 'buildout.cfg',
... r"""
... [buildout]
... develop = demo
... parts = testdemo
... offline = true
...
... [testdemo]
... recipe = zc.recipe.testrunner
... eggs = demo
... extra-paths = /usr/local/zope/lib/python
... defaults = ['--tests-pattern', '^f?tests$',
... '-v'
... ]
... initialization = sys.stdout.write('Hello all you egg-laying pythons!\n')
... sys.stdout.write('I thought pythons were live bearers?\n')
... """)>>> print_(system(os.path.join(sample_buildout, 'bin', 'buildout') + ' -q'),
... end='')>>> cat(sample_buildout, 'bin', 'testdemo') # doctest: +ELLIPSIS
#!/usr/local/bin/python2.4
<BLANKLINE>
import sys
sys.path[0:0] = [
...
]
<BLANKLINE>
import os
sys.argv[0] = os.path.abspath(sys.argv[0])
os.chdir('/sample-buildout/parts/testdemo')
sys.stdout.write('Hello all you egg-laying pythons!\n')
sys.stdout.write('I thought pythons were live bearers?\n')
<BLANKLINE>
import zope.testrunner
<BLANKLINE>
if __name__ == '__main__':
sys.exit(zope.testrunner.run((['--tests-pattern', '^f?tests$',
'-v'
]) + [
'--test-path', '/sample-buildout/demo',
]))If the relative-paths option is used, egg (and extra) paths are
generated relative to the test script.>>> write(sample_buildout, 'buildout.cfg',
... """
... [buildout]
... develop = demo
... parts = testdemo
... offline = true
...
... [testdemo]
... recipe = zc.recipe.testrunner
... eggs = demo
... extra-paths = /usr/local/zope/lib/python
... ${buildout:directory}/sources
... relative-paths = true
... """)>>> print_(system(os.path.join(sample_buildout, 'bin', 'buildout') + ' -q'),
... end='')>>> cat(sample_buildout, 'bin', 'testdemo') # doctest: +ELLIPSIS
#!/usr/local/bin/python2.4
<BLANKLINE>
import os
<BLANKLINE>
join = os.path.join
base = os.path.dirname(os.path.abspath(os.path.realpath(__file__)))
base = os.path.dirname(base)
<BLANKLINE>
import sys
sys.path[0:0] = [
join(base, 'demo'),
...
'/usr/local/zope/lib/python',
join(base, 'sources'),
]
<BLANKLINE>
import os
sys.argv[0] = os.path.abspath(sys.argv[0])
os.chdir(join(base, 'parts/testdemo'))
<BLANKLINE>
<BLANKLINE>
import zope.testrunner
<BLANKLINE>
if __name__ == '__main__':
sys.exit(zope.testrunner.run([
'--test-path', join(base, 'demo'),
]))The relative-paths option can be specified at the buildout level:>>> write(sample_buildout, 'buildout.cfg',
... """
... [buildout]
... develop = demo
... parts = testdemo
... offline = true
... relative-paths = true
...
... [testdemo]
... recipe = zc.recipe.testrunner
... eggs = demo
... extra-paths = /usr/local/zope/lib/python
... ${buildout:directory}/sources
... """)>>> print_(system(os.path.join(sample_buildout, 'bin', 'buildout') + ' -q'),
... end='')>>> cat(sample_buildout, 'bin', 'testdemo') # doctest: +ELLIPSIS
#!/usr/local/bin/python2.4
<BLANKLINE>
import os
<BLANKLINE>
join = os.path.join
base = os.path.dirname(os.path.abspath(os.path.realpath(__file__)))
base = os.path.dirname(base)
<BLANKLINE>
import sys
sys.path[0:0] = [
join(base, 'demo'),
...
'/usr/local/zope/lib/python',
join(base, 'sources'),
]
<BLANKLINE>
import os
sys.argv[0] = os.path.abspath(sys.argv[0])
os.chdir(join(base, 'parts/testdemo'))
<BLANKLINE>
<BLANKLINE>
import zope.testrunner
<BLANKLINE>
if __name__ == '__main__':
sys.exit(zope.testrunner.run([
'--test-path', join(base, 'demo'),
])) |
zc.recipe.wrapper | 1.1.0 (2010-05-21)Fixed testsThe wrapper now uses a path relative to its location to invoke a script1.0.1 (2009-12-15)Re-release.1.0.0 (2008-09-03)First Open Source release.1.0.0a1 (2008-08-28)First internal release |
zc.recipe.zope3checkout | Recipe for creating a Zope 3 checkout in a buildout.Hopefully, when Zope is packaged as eggs, this won’t be necessary.The recipe has two options:urlThe Subversion URL to use to checkout Zope. For example, to get the 3.3
branch, use:url = svn://svn.zope.org/repos/main/branches/3.3This option is required.revisionThe revision to check out. This is optional and defaults to “HEAD”.The checkout is installed into a subdirectory of the buildout parts
directory whose name is the part name used for the recipe.This location is recorded in a ‘location’ option within the section
that other recipes can query to get the location.

Change History

1.2 (2007-02-09)

Added support for the buildout newest mode (-N option) to avoid
checking for updates when -N is used.1.1 (2007-01-22)Feature ChangesAdded an update method to work well with current buildout versions.Bugs FixedWhen updating installs, extensions and package-includes weren’t
updated properly.1.0.dev_r68900Initial release. |
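Putting the two options together, a buildout part using this recipe might look like the following sketch (the part name is illustrative; the URL is the 3.3-branch example from the text):

```ini
[buildout]
parts = zope3

[zope3]
recipe = zc.recipe.zope3checkout
# Subversion URL for the Zope source to check out (required)
url = svn://svn.zope.org/repos/main/branches/3.3
# optional; defaults to HEAD
revision = HEAD
```

The checkout lands in parts/zope3, and other sections can refer to ${zope3:location} to find it.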
zc.recipe.zope3instance | This recipe creates a Zope instance that has been extended by a
collection of eggs.The recipe takes the following options:zope3The name of a section providing a Zope 3 installation definition.
This defaults to zope3. The section is required to have a
location option giving the location of the installation. This
could be a section used to install a part, like a Zope 3 checkout,
or simply a section with a location option pointing to an existing
install.databaseThe name of a section defining a zconfig option that has a zodb
section.userThe user name and password for the manager user.eggsOne or more requirements for distributions to be included.zcmlIf specified, provides the list of package ZCML files to include in
the instance’s package includes and their order.By default, the ZCML files normally included in package-includes
are omitted. To include these, include '*' in the list of
includes.Each entry is a package name with an optional include type and file
name. A package name can be optionally followed by a ':' and a
file name within the package. The default file name is
configure.zcml. The string ‘-meta’ can be included between the
file name and the package name. If so, then the default file name
is meta.zcml and the include will be treated as a meta include.
Similarly for ‘-overrides’. For example, the include:foo.barCauses the file named NNN-foo.bar-configure.zcml to be inserted
into package-includes containing:<include package="foo.bar" file="configure.zcml" />where NNN is a 3-digit number computed from the order of the entry
in the zcml option.The include:foo.bar-metaCauses the file named NNN-foo.bar-meta.zcml to be inserted
into package-includes containing:<include package="foo.bar" file="meta.zcml" />The include:foo.bar-overrides:x.zcmlCauses the file named NNN-foo.bar-overrides.zcml to be inserted
into package-includes containing:<include package="foo.bar" file="x.zcml" />To doNeed testsHopefully, for Zope 3.4, we'll be able to make the instance-creation
process more modular, which will allow a cleaner implementation for
this recipe.Support for multiple storagesSupport for more configuration options. |
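Taken together, the options described above read like this sketch of a buildout part (the section names, user value, and egg name are illustrative, not from this document):

```ini
[buildout]
parts = instance

[zope3]
# an existing Zope 3 install, or a part such as a Zope 3 checkout
location = /opt/zope3

[instance]
recipe = zc.recipe.zope3instance
# section with a ``location`` option; defaults to ``zope3``
zope3 = zope3
# section whose ``zconfig`` option has a ``zodb`` section
database = db
user = manager:secret
eggs = my.app
# ordered ZCML includes: meta include first, then configure, then the
# default package-includes (``*``)
zcml = my.app-meta
       my.app
       *
```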
zcred | Modern authoriZation-centric credential format, ala SPKI/SDSI, Macaroons, and Vanadium |
zc.relation | Relation CatalogContentsRelation CatalogOverviewHistorySetting Up a Relation CatalogCreating the CatalogAdding RelationsSearchingQueries,findRelations, and special query valuesfindValuesand theRELATIONquery keyTokensTransitive Searching, Query Factories, andmaxDepthfindRelationChainsandtargetQueryfilterandtargetFilterSearch indexesTransitive cycles (and updating and removing relations)canFindWorking with More Complex RelationsExtrinsic Two-Way RelationsMulti-Way RelationsAdditional FunctionalityListenersTheclearMethodThecopyMethodTheignoreSearchIndexargumentfindRelationTokens()findValueTokens(INDEX_NAME)ConclusionReviewNext StepsTokens and Joins: zc.relation Catalog Extended ExampleIntroduction and Set UpOrganizationsRolesQuery Factory JoinsSearch Index for Query Factory JoinsListeners, Catalog Administration, and Joining Across Relation CatalogsWorking with Search Indexes: zc.relation Catalog Extended ExampleIntroductionTransitive Search IndexesHelpersOptimizing Relation Catalog UseChanges2.0 (2023-04-05)1.2 (2023-03-28)1.1.post2 (2018-06-18)1.1.post1 (2018-06-18)1.1 (2018-06-15)1.0 (2008-04-23)Incompatibilities with zc.relationship 1.x indexChanges and new featuresOverviewThe relation catalog can be used to optimize intransitive and transitive
searches for N-ary relations of finite, preset dimensions.

For example, you can index simple two-way relations, like employee to
supervisor; RDF-style triples of subject-predicate-object; and more complex
relations such as subject-predicate-object with context and state. These
can be searched with variable definitions of transitive behavior.

The catalog can be used in the ZODB or standalone. It is a generic, relatively
policy-free tool.

It is expected to be used usually as an engine for more specialized and
constrained tools and APIs. Three such tools are zc.relationship containers,
plone.relations containers, and zc.vault. The documents in the package,
including this one, describe other possible uses.

History

This is a refactoring of the ZODB-only parts of the zc.relationship package.
Specifically, the zc.relation catalog is largely equivalent to the
zc.relationship index. The index in the zc.relationship 2.x line is an
almost-completely backwards-compatible wrapper of the zc.relation catalog.
zc.relationship will continue to be maintained, though active development is
expected to go into zc.relation.

Many of the ideas come from discussions with and code from Casey Duncan, Tres
Seaver, Ken Manheimer, and more.

Setting Up a Relation Catalog

In this section, we will be introducing the following ideas.

- Relations are objects with indexed values.
- You add value indexes to relation catalogs to be able to search. Values
  can be identified to the catalog with callables or interface elements. The
  indexed value must be specified to the catalog as a single value or a
  collection.
- Relations and their values are stored in the catalog as tokens: unique
  identifiers that you can resolve back to the original value. Integers are the
  most efficient tokens, but others can work fine too.
- Token type determines the BTree module needed.
- You must define your own functions for tokenizing and resolving tokens. These
  functions are registered with the catalog for the relations and for each of
  their value indexes.
- Relations are indexed with ``index``.

We will use a simple two way relation as our example here. A brief introduction
to a more complex RDF-style subject-predicate-object set up can be found later
in the document.

Creating the Catalog

Imagine a two way relation from one value to another. Let's say that we
are modeling a relation of people to their supervisors: an employee may
have a single supervisor. For this first example, the relation between
employee and supervisor will be intrinsic: the employee has a pointer to
the supervisor, and the employee object itself represents the relation.

Let's say further, for simplicity, that employee names are unique and
can be used to represent employees. We can use names as our "tokens".

Tokens are similar to the primary key in a relational database. A token is a
way to identify an object. It must sort reliably and you must be able to write
a callable that reliably resolves to the object given the right context. In
Zope 3, intids (zope.app.intid) and keyreferences (zope.app.keyreference) are
good examples of reasonable tokens.

As we'll see below, you provide a way to convert objects to tokens, and resolve
tokens to objects, for the relations, and for each value index individually.
They can be all the same functions or completely different, depending on
your needs.

For speed, integers make the best tokens; followed by other
immutables like strings; followed by non-persistent objects; followed by
persistent objects. The choice also determines a choice of BTree module, as
we’ll see below.Here is our toyEmployeeexample class. Again, we will use the employee
name as the tokens.>>> employees = {} # we'll use this to resolve the "name" tokens
>>> from functools import total_ordering
>>> @total_ordering
... class Employee(object):
... def __init__(self, name, supervisor=None):
... if name in employees:
... raise ValueError('employee with same name already exists')
... self.name = name # expect this to be readonly
... self.supervisor = supervisor
... employees[name] = self
... # the next parts just make the tests prettier
... def __repr__(self):
... return '<Employee instance "' + self.name + '">'
... def __lt__(self, other):
... return self.name < other.name
... def __eq__(self, other):
... return self is other
... def __hash__(self):
... ''' Dummy method needed because we defined __eq__
... '''
... return 1
...

So, we need to define how to turn employees into their tokens. We call the
tokenization a "dump" function. Conversely, the function to resolve tokens into
objects is called a "load".

Functions to dump relations and values get several arguments. The first
argument is the object to be tokenized. Next, because it helps sometimes to
provide context, is the catalog. The last argument is a dictionary that will be
shared for a given search. The dictionary can be ignored, or used as a cache
for optimizations (for instance, to stash a utility that you looked up).

For this example, our function is trivial: we said the token would be
the employee's name.

>>> def dumpEmployees(emp, catalog, cache):
... return emp.name
...

If you store the relation catalog persistently (e.g., in the ZODB) be aware
that the callables you provide must be picklable–a module-level function,
for instance.

We also need a way to turn tokens into employees, or "load".

The "load" functions get the token to be resolved; the catalog, for
context; and a dict cache, for optimizations of subsequent calls.

You might have noticed in our Employee.__init__ that we keep a mapping
of name to object in theemployeesglobal dict (defined right above
the class definition). We’ll use that for resolving the tokens.>>> def loadEmployees(token, catalog, cache):
... return employees[token]
...

Now we know enough to get started with a catalog. We'll instantiate it
by specifying how to tokenize relations, and what kind of BTree modules
should be used to hold the tokens.

How do you pick BTree modules?

- If the tokens are 32-bit ints, choose BTrees.family32.II,
  BTrees.family32.IF or BTrees.family32.IO.
- If the tokens are 64 bit ints, choose BTrees.family64.II,
  BTrees.family64.IF or BTrees.family64.IO.
- If they are anything else, choose BTrees.family32.OI,
  BTrees.family64.OI, or BTrees.family32.OO (or BTrees.family64.OO–they
  are the same).

Within these rules, the choice is somewhat arbitrary unless you plan to merge
these results with that of another source that is using a particular BTree
module. BTree set operations only work within the same module, so you must
match module to module. The catalog defaults to IF trees, because that’s what
standard zope catalogs use. That’s as reasonable a choice as any, and will
potentially come in handy if your tokens are in fact the same as those used by
the zope catalog and you want to do some set operations.

In this example, our tokens are strings, so we want OO or an OI variant. We'll
choose BTrees.family32.OI, arbitrarily.

>>> import zc.relation.catalog
>>> import BTrees
>>> catalog = zc.relation.catalog.Catalog(dumpEmployees, loadEmployees,
... btree=BTrees.family32.OI)

[1] The catalog provides ICatalog.

>>> from zope.interface.verify import verifyObject
>>> import zc.relation.interfaces
>>> verifyObject(zc.relation.interfaces.ICatalog, catalog)
True

[2] Old instances of zc.relationship indexes, which in the newest
version subclass a zc.relation Catalog, used to have a dict in an
internal data structure. We specify that here so that the code that
converts the dict to an OOBTree can have a chance to run.

>>> catalog._attrs = dict(catalog._attrs)

Look! A relation catalog! We can't do very
much searching with it so far though, because the catalog doesn’t have any
indexes.

In this example, the relation itself represents the employee, so we won't need
to index that separately.

But we do need a way to tell the catalog how to find the other end of the
relation, the supervisor. You can specify this to the catalog with an attribute
or method specified fromzope.interface Interface, or with a callable.
We’ll use a callable for now. The callable will receive the indexed relation
and the catalog for context.

>>> def supervisor(emp, catalog):
... return emp.supervisor # None or another employee
...We’ll also need to specify how to tokenize (dump and load) those values. In
this case, we’re able to use the same functions as the relations themselves.
However, do note that we can specify a completely different way to dump and
load for each "value index," or relation element.

We could also specify the name to call the index, but it will default to the
__name__ of the function (or interface element), which will work just fine
for us now.

Now we can add the "supervisor" value index.

>>> catalog.addValueIndex(supervisor, dumpEmployees, loadEmployees,
... btree=BTrees.family32.OI)

Now we have an index[3].

[3] Adding a value index can generate several
exceptions.

You must supply both of dump and load or neither.

>>> catalog.addValueIndex(supervisor, dumpEmployees, None,
... btree=BTrees.family32.OI, name='supervisor2')
Traceback (most recent call last):
...
ValueError: either both of 'dump' and 'load' must be None, or neither

In this example, even if we fix it, we'll get an error, because we have
already indexed the supervisor function.

>>> catalog.addValueIndex(supervisor, dumpEmployees, loadEmployees,
... btree=BTrees.family32.OI, name='supervisor2')
... # doctest: +ELLIPSIS
Traceback (most recent call last):
...
ValueError: ('element already indexed', <function supervisor at ...>)

You also can't add a different function under the same name.

>>> def supervisor2(emp, catalog):
... return emp.supervisor # None or another employee
...
>>> catalog.addValueIndex(supervisor2, dumpEmployees, loadEmployees,
... btree=BTrees.family32.OI, name='supervisor')
... # doctest: +ELLIPSIS
Traceback (most recent call last):
...
ValueError: ('name already used', 'supervisor')

Finally, if your function does not have a __name__ and you do not
provide one, you may not add an index.

>>> class Supervisor3(object):
... __name__ = None
... def __call__(klass, emp, catalog):
... return emp.supervisor
...
>>> supervisor3 = Supervisor3()
>>> supervisor3.__name__
>>> catalog.addValueIndex(supervisor3, dumpEmployees, loadEmployees,
... btree=BTrees.family32.OI)
... # doctest: +ELLIPSIS
Traceback (most recent call last):
...
ValueError: no name specified

>>> [info['name'] for info in catalog.iterValueIndexInfo()]
['supervisor']

Adding Relations

Now let's create a few employees. All but one will have supervisors.
If you recall our toyEmployeeclass, the first argument to the
constructor is the employee name (and therefore the token), and the
optional second argument is the supervisor.

>>> a = Employee('Alice')
>>> b = Employee('Betty', a)
>>> c = Employee('Chuck', a)
>>> d = Employee('Diane', b)
>>> e = Employee('Edgar', b)
>>> f = Employee('Frank', c)
>>> g = Employee('Galyn', c)
>>> h = Employee('Howie', d)

Here is a diagram of the hierarchy.

        Alice
       __/ \__
    Betty    Chuck
    /   \    /   \
Diane Edgar Frank Galyn
  |
Howie

Let's tell the catalog about the relations, using the index method.

>>> for emp in (a,b,c,d,e,f,g,h):
... catalog.index(emp)
...We’ve now created the relation catalog and added relations to it. We’re ready
to search!SearchingIn this section, we will introduce the following ideas.Queries to the relation catalog are formed with dicts.Query keys are the names of the indexes you want to search, or, for the
special case of precise relations, thezc.relation.RELATIONconstant.Query values are the tokens of the results you want to match; orNone,
indicating relations that haveNoneas a value (or an empty collection,
if it is a multiple). Search values can usezc.relation.catalog.any(args)orzc.relation.catalog.Any(args)to
specify multiple (non-None) results to match for a given key.The index has a variety of methods to help you work with tokens.tokenizeQueryis typically the most used, though others are available.To find relations that match a query, usefindRelationsorfindRelationTokens.To find values that match a query, usefindValuesorfindValueTokens.You search transitively by using a query factory. Thezc.relation.queryfactory.TransposingTransitiveis a good common case
factory that lets you walk up and down a hierarchy. A query factory can be
passed in as an argument to search methods as aqueryFactory, or
installed as a default behavior usingaddDefaultQueryFactory.To find how a query is related, usefindRelationChainsorfindRelationTokenChains.To find out if a query is related, usecanFind.Circular transitive relations are handled to prevent infinite loops. They
are identified infindRelationChainsandfindRelationTokenChainswith
azc.relation.interfaces.ICircularRelationPathmarker interface.search methods share the following arguments:maxDepth, limiting the transitive depth for searches;filter, allowing code to filter transitive paths;targetQuery, allowing a query to filter transitive paths on the basis
of the endpoint;targetFilter, allowing code to filter transitive paths on the basis of
the endpoint; andqueryFactory, mentioned above.You can set up search indexes to speed up specific transitive searches.Queries,findRelations, and special query valuesSo who works for Alice? That means we want to get the relations–the
employees–with asupervisorof Alice.The heart of a question to the catalog is a query. A query is spelled
as a dictionary. The main idea is simply that keys in a dictionary
specify index names, and the values specify the constraints.The values in a query are always expressed with tokens. The catalog has
several helpers to make this less onerous, but for now let’s take
advantage of the fact that our tokens are easily comprehensible.

>>> sorted(catalog.findRelations({'supervisor': 'Alice'}))
[<Employee instance "Betty">, <Employee instance "Chuck">]

Alice is the direct (intransitive) boss of Betty and Chuck.

What if you want to ask "who doesn't report to anyone?" Then you want to
ask for a relation in which the supervisor is None.

>>> list(catalog.findRelations({'supervisor': None}))
[<Employee instance "Alice">]

Alice is the only employee who doesn't report to anyone.

What if you want to ask "who reports to Diane or Chuck?" Then you use the
zc.relation Any class or any function to pass the multiple values.

>>> sorted(catalog.findRelations(
... {'supervisor': zc.relation.catalog.any('Diane', 'Chuck')}))
... # doctest: +NORMALIZE_WHITESPACE
[<Employee instance "Frank">, <Employee instance "Galyn">,
<Employee instance "Howie">]Frank, Galyn, and Howie each report to either Diane or Chuck.[4][4]Anycan be compared.>>> zc.relation.catalog.any('foo', 'bar', 'baz')
<zc.relation.catalog.Any instance ('bar', 'baz', 'foo')>
>>> (zc.relation.catalog.any('foo', 'bar', 'baz') ==
... zc.relation.catalog.any('bar', 'foo', 'baz'))
True
>>> (zc.relation.catalog.any('foo', 'bar', 'baz') !=
... zc.relation.catalog.any('bar', 'foo', 'baz'))
False
>>> (zc.relation.catalog.any('foo', 'bar', 'baz') ==
... zc.relation.catalog.any('foo', 'baz'))
False
>>> (zc.relation.catalog.any('foo', 'bar', 'baz') !=
... zc.relation.catalog.any('foo', 'baz'))
True

findValues and the RELATION query key

So how do we find who an employee's supervisor is? Well, in this case,
look at the attribute on the employee! If you can use an attribute that
will usually be a win in the ZODB.

>>> h.supervisor
<Employee instance "Diane">

Again, as we mentioned at the start of this first example, the knowledge
of a supervisor is “intrinsic” to the employee instance. It is
possible, and even easy, to ask the catalog this kind of question, but
the catalog syntax is more geared to “extrinsic” relations, such as the
one from the supervisor to the employee: the connection between a
supervisor object and its employees is extrinsic to the supervisor, so
you actually might want a catalog to find it!

However, we will explore the syntax very briefly, because it introduces an
important pair of search methods, and because it is a stepping stone
to our first transitive search.

So, o relation catalog, who is Howie's supervisor?

To ask this question we want to get the indexed values off of the relations:
findValues. In its simplest form, the arguments are the index name of the
values you want, and a query to find the relations that have the desired
values.

What about the query? Above, we noted that the keys in a query are the names of
the indexes to search. However, in this case, we don’t want to search one or
more indexes for matching relations, as usual, but actually specify a relation:
Howie.

We do not have a value index name: we are looking for a relation. The query
key, then, should be the constantzc.relation.RELATION. For our current
example, that would mean the query is {zc.relation.RELATION: 'Howie'}.

>>> import zc.relation
>>> list(catalog.findValues(
... 'supervisor', {zc.relation.RELATION: 'Howie'}))[0]
<Employee instance "Diane">Congratulations, you just found an obfuscated and comparitively
inefficient way to writehowie.supervisor![5][5]Here’s the same with token results.>>> list(catalog.findValueTokens('supervisor',
... {zc.relation.RELATION: 'Howie'}))
['Diane']While we’re down here in the footnotes, I’ll mention that you can
search for relations that haven’t been indexed.>>> list(catalog.findRelationTokens({zc.relation.RELATION: 'Ygritte'}))
[]
>>> list(catalog.findRelations({zc.relation.RELATION: 'Ygritte'}))
[]

[6] If you use findValues or findValueTokens and
try to specify a value name that is not indexed, you get a ValueError.

>>> catalog.findValues('foo')
Traceback (most recent call last):
...
ValueError: ('name not indexed', 'foo')

Slightly more usefully, you can use other query keys along with
zc.relation.RELATION. This asks, "Of Betty, Alice, and Frank, who are
supervised by Alice?"

>>> sorted(catalog.findRelations(
... {zc.relation.RELATION: zc.relation.catalog.any(
... 'Betty', 'Alice', 'Frank'),
... 'supervisor': 'Alice'}))
[<Employee instance "Betty">]Only Betty is.TokensAs mentioned above, the catalog provides several helpers to work with tokens.
The most frequently used istokenizeQuery, which takes a query with object
values and converts them to tokens using the “dump” functions registered for
the relations and indexed values. Here are alternate spellings of some of the
queries we’ve encountered above.>>> catalog.tokenizeQuery({'supervisor': a})
{'supervisor': 'Alice'}
>>> catalog.tokenizeQuery({'supervisor': None})
{'supervisor': None}
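Conceptually, this is a small mapping operation. The toy sketch below (illustrative only, with hypothetical helper names; not the zc.relation implementation) shows the dump/load round trip that tokenizeQuery and its inverse, resolveQuery, rely on:

```python
# Toy sketch, not zc.relation's code: tokenize_query maps the registered
# "dump" function over each query value; resolve_query maps "load" back.
employees = {}  # token -> object, as in the Employee example

class Person:
    def __init__(self, name):
        self.name = name
        employees[name] = self

def dump(obj, catalog=None, cache=None):
    # None stays None: it means "no value" in a query
    return None if obj is None else obj.name

def load(token, catalog=None, cache=None):
    return None if token is None else employees[token]

def tokenize_query(query):
    return {key: dump(value) for key, value in query.items()}

def resolve_query(query):
    return {key: load(token) for key, token in query.items()}

alice = Person('Alice')
tokenized = tokenize_query({'supervisor': alice})
assert tokenized == {'supervisor': 'Alice'}
# resolve_query is the inverse: we get the original object back
assert resolve_query(tokenized)['supervisor'] is alice
```

The essential contract is only that load is the inverse of dump for every value that can appear in a query.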
>>> import pprint
>>> result = catalog.tokenizeQuery(
... {zc.relation.RELATION: zc.relation.catalog.any(a, b, f),
... 'supervisor': a}) # doctest: +NORMALIZE_WHITESPACE
>>> pprint.pprint(result)
{None: <zc.relation.catalog.Any instance ('Alice', 'Betty', 'Frank')>,
 'supervisor': 'Alice'}

(If you are wondering about that None in the last result, yes,
zc.relation.RELATION is just readability sugar for None.)

So, here's a real search using tokenizeQuery. We'll make an alias for
catalog.tokenizeQuery just to shorten things up a bit.

>>> query = catalog.tokenizeQuery
>>> sorted(catalog.findRelations(query(
... {zc.relation.RELATION: zc.relation.catalog.any(a, b, f),
... 'supervisor': a})))
[<Employee instance "Betty">]The catalog always has parallel search methods, one for finding objects, as
seen above, and one for finding tokens (the only exception iscanFind,
described below). Finding tokens can be much more efficient, especially if the
result from the relation catalog is just one step along the path of finding
your desired result. But finding objects is simpler for some common cases.
Here’s a quick example of some queries above, getting tokens rather than
objects.

You can also spell a query in tokenizeQuery with keyword arguments. This
won't work if your key is zc.relation.RELATION, but otherwise it can
improve readability. We'll see some examples of this below as well.

>>> sorted(catalog.findRelationTokens(query(supervisor=a)))
['Betty', 'Chuck']

>>> sorted(catalog.findRelationTokens({'supervisor': None}))
['Alice']

>>> sorted(catalog.findRelationTokens(
... query(supervisor=zc.relation.catalog.any(c, d))))
['Frank', 'Galyn', 'Howie']

>>> sorted(catalog.findRelationTokens(
... query({zc.relation.RELATION: zc.relation.catalog.any(a, b, f),
... 'supervisor': a})))
['Betty']

The catalog provides several other methods just for working with tokens.

- resolveQuery: the inverse of tokenizeQuery, converting a
  tokenized query to a query with objects.
- tokenizeValues: returns an iterable of tokens for the values of the given
  index name.
- resolveValueTokens: returns an iterable of values for the tokens of the
  given index name.
- tokenizeRelation: returns a token for the given relation.
- resolveRelationToken: returns a relation for the given token.
- tokenizeRelations: returns an iterable of tokens for the relations given.
- resolveRelationTokens: returns an iterable of relations for the tokens
  given.

These methods are lesser used, and described in more technical documents in
this package.

Transitive Searching, Query Factories, and maxDepth

So, we've seen a lot of one-level, intransitive searching. What about
transitive searching? Well, you need to tell the catalog how to walk the tree.
In simple (and very common) cases like this, the
zc.relation.queryfactory.TransposingTransitive will do the trick.

A transitive query factory is just a callable that the catalog uses to
ask “I got this query, and here are the results I found. I’m supposed to
walk another step transitively, so what query should I search for next?”
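That question-and-answer loop can be sketched in plain Python. The following toy is illustrative only (the dict "index", the factory, and the breadth-first walk are simplified stand-ins, not zc.relation's implementation), but it shows how transposing two query keys turns each matched relation into the next query:

```python
from collections import deque

# Toy sketch, not zc.relation's code. A transposing transitive factory
# answers "what query comes next?" by swapping the two transposed keys:
# the relation matched in this step becomes the 'supervisor' value of
# the next step's query.
def transposing_factory(relation_key, value_key):
    def factory(query, relation_token):
        if value_key in query:
            return {value_key: relation_token}   # keep walking "down"
        return {relation_key: relation_token}    # or back "up"
    return factory

# toy intransitive index: supervisor token -> directly supervised employees
supervised = {
    'Alice': {'Betty', 'Chuck'},
    'Betty': {'Diane', 'Edgar'},
    'Diane': {'Howie'},
}

def find_relations(query, query_factory, max_depth=None):
    """Breadth-first transitive search over the toy index."""
    seen = set()
    queue = deque([(query, 1)])
    while queue:
        q, depth = queue.popleft()
        for token in sorted(supervised.get(q.get('supervisor'), ())):
            if token in seen:
                continue  # cycle protection: never revisit a relation
            seen.add(token)
            yield token
            if max_depth is None or depth < max_depth:
                queue.append((query_factory(q, token), depth + 1))

factory = transposing_factory(None, 'supervisor')
print(list(find_relations({'supervisor': 'Betty'}, factory)))
# -> ['Diane', 'Edgar', 'Howie']
```

The `seen` set is also why circular relations cannot cause infinite loops, and the `max_depth` parameter plays the same role as the catalog's maxDepth argument discussed below.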
Writing a factory is more complex than we want to talk about right now,
but using theTransposingTransitiveQueryFactoryis easy. You just tell
it the two query names it should transpose for walking in either
direction.

For instance, here we just want to tell the factory to transpose the two keys
we’ve used,zc.relation.RELATIONand ‘supervisor’. Let’s make a factory,
use it in a query for a couple of transitive searches, and then, if you want,
you can read through a footnote to talk through what is happening.

Here's the factory.

>>> import zc.relation.queryfactory
>>> factory = zc.relation.queryfactory.TransposingTransitive(
... zc.relation.RELATION, 'supervisor')

Now factory is just a callable. Let's let it help answer a couple of
questions.

Who are all of Howie's supervisors transitively (this looks up in the
diagram)?

>>> list(catalog.findValues('supervisor', {zc.relation.RELATION: 'Howie'},
... queryFactory=factory))
... # doctest: +NORMALIZE_WHITESPACE
[<Employee instance "Diane">, <Employee instance "Betty">,
<Employee instance "Alice">]Who are all of the people Betty supervises transitively, breadth first (this
looks down in the diagram)?>>> people = list(catalog.findRelations(
... {'supervisor': 'Betty'}, queryFactory=factory))
>>> sorted(people[:2])
[<Employee instance "Diane">, <Employee instance "Edgar">]
>>> people[2]
<Employee instance "Howie">Yup, that looks right. So how did that work? If you care, read this
footnote.[13]This transitive factory is really the only transitive factory you would
want for this particular catalog, so it probably is safe to wire it in
as a default. You can add multiple query factories to match different
queries usingaddDefaultQueryFactory.>>> catalog.addDefaultQueryFactory(factory)Now all searches are transitive by default.>>> list(catalog.findValues('supervisor', {zc.relation.RELATION: 'Howie'}))
... # doctest: +NORMALIZE_WHITESPACE
[<Employee instance "Diane">, <Employee instance "Betty">,
<Employee instance "Alice">]
>>> people = list(catalog.findRelations({'supervisor': 'Betty'}))
>>> sorted(people[:2])
[<Employee instance "Diane">, <Employee instance "Edgar">]
>>> people[2]
<Employee instance "Howie">We can force a non-transitive search, or a specific search depth, withmaxDepth[7].[7]A search with amaxDepth> 1 but
noqueryFactoryraises an error.>>> catalog.removeDefaultQueryFactory(factory)
>>> catalog.findRelationTokens({'supervisor': 'Diane'}, maxDepth=3)
Traceback (most recent call last):
...
ValueError: if maxDepth not in (None, 1), queryFactory must be available

>>> catalog.addDefaultQueryFactory(factory)

>>> list(catalog.findValues(
... 'supervisor', {zc.relation.RELATION: 'Howie'}, maxDepth=1))
[<Employee instance "Diane">]
>>> sorted(catalog.findRelations({'supervisor': 'Betty'}, maxDepth=1))
[<Employee instance "Diane">, <Employee instance "Edgar">][8][8]maxDepthmust be None or a positive integer, or
else you’ll get a value error.>>> catalog.findRelations({'supervisor': 'Betty'}, maxDepth=0)
Traceback (most recent call last):
...
ValueError: maxDepth must be None or a positive integer>>> catalog.findRelations({'supervisor': 'Betty'}, maxDepth=-1)
Traceback (most recent call last):
...
ValueError: maxDepth must be None or a positive integerWe’ll introduce some other available search
arguments later in this document and in other documents. It’s important
to note thatall search methods share the same arguments as
``findRelations``. findValues and findValueTokens only add the
initial argument of specifying the desired value.

We've looked at two search methods so far: the findValues and
findRelations methods help you ask what is related. But what if you
want to know how things are transitively related?

findRelationChains and targetQuery

Another search method, findRelationChains, helps you discover how
things are transitively related.

The method name says "find relation chains". But what is a "relation
chain"? In this API, it is a transitive path of relations. For
instance, what's the chain of command above Howie? findRelationChains
will return each unique path.

>>> list(catalog.findRelationChains({zc.relation.RELATION: 'Howie'}))
... # doctest: +NORMALIZE_WHITESPACE
[(<Employee instance "Howie">,),
(<Employee instance "Howie">, <Employee instance "Diane">),
(<Employee instance "Howie">, <Employee instance "Diane">,
<Employee instance "Betty">),
(<Employee instance "Howie">, <Employee instance "Diane">,
<Employee instance "Betty">, <Employee instance "Alice">)]Look at that result carefully. Notice that the result is an iterable of
tuples. Each tuple is a unique chain, which may be a part of a
subsequent chain. In this case, the last chain is the longest and the
most comprehensive.What if we wanted to see all the paths from Alice? That will be one
chain for each supervised employee, because it shows all possible paths.>>> sorted(catalog.findRelationChains(
... {'supervisor': 'Alice'}))
... # doctest: +NORMALIZE_WHITESPACE
[(<Employee instance "Betty">,),
(<Employee instance "Betty">, <Employee instance "Diane">),
(<Employee instance "Betty">, <Employee instance "Diane">,
<Employee instance "Howie">),
(<Employee instance "Betty">, <Employee instance "Edgar">),
(<Employee instance "Chuck">,),
(<Employee instance "Chuck">, <Employee instance "Frank">),
(<Employee instance "Chuck">, <Employee instance "Galyn">)]That’s all the paths–all the chains–from Alice. We sorted the results,
but normally they would be breadth first.But what if we wanted to just find the paths from one query result to
another query result–say, we wanted to know the chain of command from Alice
down to Howie? Then we can specify atargetQuerythat specifies the
characteristics of our desired end point (or points).>>> list(catalog.findRelationChains(
... {'supervisor': 'Alice'},
... targetQuery={zc.relation.RELATION: 'Howie'}))
... # doctest: +NORMALIZE_WHITESPACE
[(<Employee instance "Betty">, <Employee instance "Diane">,
<Employee instance "Howie">)]So, Betty supervises Diane, who supervises Howie.Note thattargetQuerynow joinsmaxDepthin our collection of shared
search arguments that we have introduced.filterandtargetFilterWe can take a quick look now at the last of the two shared search arguments:filterandtargetFilter. These two are similar in that they both are
callables that can approve or reject given relations in a search based on
whatever logic you can code. They differ in thatfilterstops any further
transitive searches from the relation, whiletargetFiltermerely omits the
given result but allows further search from it. LiketargetQuery, then,targetFilteris good when you want to specify the other end of a path.As an example, let’s say we only want to return female employees.>>> female_employees = ('Alice', 'Betty', 'Diane', 'Galyn')
>>> def female_filter(relchain, query, catalog, cache):
... return relchain[-1] in female_employees
...

Here are all the female employees supervised by Alice transitively, using
targetFilter.

>>> list(catalog.findRelations({'supervisor': 'Alice'},
... targetFilter=female_filter))
... # doctest: +NORMALIZE_WHITESPACE
[<Employee instance "Betty">, <Employee instance "Diane">,
<Employee instance "Galyn">]Here are all the female employees supervised by Chuck.>>> list(catalog.findRelations({'supervisor': 'Chuck'},
... targetFilter=female_filter))
[<Employee instance "Galyn">]The same method used as a filter will only return females directly
supervised by other females–not Galyn, in this case.>>> list(catalog.findRelations({'supervisor': 'Alice'},
... filter=female_filter))
[<Employee instance "Betty">, <Employee instance "Diane">]These can be combined with one another, and with the other search
arguments[9].[9]For instance:>>> list(catalog.findRelationTokens(
... {'supervisor': 'Alice'}, targetFilter=female_filter,
... targetQuery={zc.relation.RELATION: 'Galyn'}))
['Galyn']
>>> list(catalog.findRelationTokens(
... {'supervisor': 'Alice'}, targetFilter=female_filter,
... targetQuery={zc.relation.RELATION: 'Not known'}))
[]
>>> arbitrary = ['Alice', 'Chuck', 'Betty', 'Galyn']
>>> def arbitrary_filter(relchain, query, catalog, cache):
... return relchain[-1] in arbitrary
>>> list(catalog.findRelationTokens({'supervisor': 'Alice'},
... filter=arbitrary_filter,
... targetFilter=female_filter))
['Betty', 'Galyn']

Search indexes

Without setting up any additional indexes, the transitive behavior of
thefindRelationsandfindValuesmethods essentially relies on the
brute force searches offindRelationChains. Results are iterables
that are gradually computed. For instance, let’s repeat the question
“Whom does Betty supervise?”. Notice thatresfirst populates a list
with three members, but then does not populate a second list. The
iterator has been exhausted.

>>> res = catalog.findRelationTokens({'supervisor': 'Betty'})
>>> unindexed = sorted(res)
>>> len(unindexed)
3
>>> len(list(res)) # iterator is exhausted
0

The brute force of this approach can be sufficient in many cases, but
sometimes speed for these searches is critical. In these cases, you can
add a “search index”. A search index speeds up the result of one or
more precise searches by indexing the results. Search indexes can
affect the results of searches with aqueryFactoryinfindRelations,findValues, and the soon-to-be-introducedcanFind, but they do not
affect findRelationChains.

The zc.relation package currently includes two kinds of search indexes, one for
indexing transitive membership searches in a hierarchy and one for intransitive
searches explored in tokens.rst in this package, which can optimize frequent
searches on complex queries or can effectively change the meaning of an
intransitive search. Other search index implementations and approaches may be
added in the future.Here’s a very brief example of adding a search index for the transitive
searches seen above that specify a ‘supervisor’.>>> import zc.relation.searchindex
>>> catalog.addSearchIndex(
... zc.relation.searchindex.TransposingTransitiveMembership(
... 'supervisor', zc.relation.RELATION))

The zc.relation.RELATION describes how to walk back up the chain. Search
indexes are explained in reasonable detail in searchindex.rst.

Now that we have added the index, we can search again. The result this
time is already computed, so, at least when you ask for tokens, it
is repeatable.

>>> res = catalog.findRelationTokens({'supervisor': 'Betty'})
>>> len(list(res))
3
>>> len(list(res))
3
>>> sorted(res) == unindexed
True

Note that the breadth-first sorting is lost when an index is used[10].

[10] The scenario we are looking at in this document shows a case
in which special logic in the search index needs to address updates.
For example, if we move Howie from Diane

        Alice
       __/ \__
    Betty    Chuck
    /   \    /   \
Diane Edgar Frank Galyn
  |
Howie

to Galyn

        Alice
       __/ \__
    Betty    Chuck
    /   \    /   \
Diane Edgar Frank Galyn
                    |
                  Howie

then the search index is correct both for the new location and the old.

>>> h.supervisor = g
>>> catalog.index(h)
>>> list(catalog.findRelationTokens({'supervisor': 'Diane'}))
[]
>>> list(catalog.findRelationTokens({'supervisor': 'Betty'}))
['Diane', 'Edgar']
>>> list(catalog.findRelationTokens({'supervisor': 'Chuck'}))
['Frank', 'Galyn', 'Howie']
>>> list(catalog.findRelationTokens({'supervisor': 'Galyn'}))
['Howie']
>>> h.supervisor = d
>>> catalog.index(h) # move him back
>>> list(catalog.findRelationTokens({'supervisor': 'Galyn'}))
[]
>>> list(catalog.findRelationTokens({'supervisor': 'Diane'}))
['Howie']Transitive cycles (and updating and removing relations)The transitive searches and the provided search indexes can handle
cycles. Cycles are less likely in the current example than some others,
but we can stretch the case a bit: imagine a “king in disguise”, in
which someone at the top works lower in the hierarchy. Perhaps Alice
works for Zane, who works for Betty, who works for Alice. Artificial,
but easy enough to draw:

       ______
      /      \
     /      Zane
    /         |
   /        Alice
  /       __/   \__
  /   Betty__      Chuck
  \-/    /   \     /   \
    Diane  Edgar Frank  Galyn
      |
    Howie

Easy to create too.

>>> z = Employee('Zane', b)
>>> a.supervisor = zNow we have a cycle. Of course, we have not yet told the catalog about it.indexcan be used both to reindex Alice and index Zane.>>> catalog.index(a)
>>> catalog.index(z)Now, if we ask who works for Betty, we get the entire tree. (We’ll ask
for tokens, just so that the result is smaller to look at.)[11][11]The result of the query for Betty, Alice, and Zane are all the
same.>>> res1 = catalog.findRelationTokens({'supervisor': 'Betty'})
>>> res2 = catalog.findRelationTokens({'supervisor': 'Alice'})
>>> res3 = catalog.findRelationTokens({'supervisor': 'Zane'})
>>> list(res1) == list(res2) == list(res3)
TrueThe cycle doesn’t pollute the index outside of the cycle.>>> res = catalog.findRelationTokens({'supervisor': 'Diane'})
>>> list(res)
['Howie']
>>> list(res) # it isn't lazy, it is precalculated
['Howie']>>> sorted(catalog.findRelationTokens({'supervisor': 'Betty'}))
... # doctest: +NORMALIZE_WHITESPACE
['Alice', 'Betty', 'Chuck', 'Diane', 'Edgar', 'Frank', 'Galyn', 'Howie',
'Zane']If we ask for the supervisors of Frank, it will include Betty.>>> list(catalog.findValueTokens(
... 'supervisor', {zc.relation.RELATION: 'Frank'}))
['Chuck', 'Alice', 'Zane', 'Betty']Paths returned byfindRelationChainsare marked with special interfaces,
and special metadata, to show the chain.>>> res = list(catalog.findRelationChains({zc.relation.RELATION: 'Frank'}))
>>> len(res)
5
>>> import zc.relation.interfaces
>>> [zc.relation.interfaces.ICircularRelationPath.providedBy(r)
... for r in res]
[False, False, False, False, True]Here’s the last chain:>>> res[-1] # doctest: +NORMALIZE_WHITESPACE
cycle(<Employee instance "Frank">, <Employee instance "Chuck">,
<Employee instance "Alice">, <Employee instance "Zane">,
<Employee instance "Betty">)The chain’s ‘cycled’ attribute has a list of queries that create a cycle.
If you run the query, or queries, you see where the cycle would
restart–where the path would have started to overlap. Sometimes the query
results will include multiple cycles, and some paths that are not cycles.
In this case, there’s only a single cycled query, which results in a single
cycled relation.>>> len(res[4].cycled)
1>>> list(catalog.findRelations(res[4].cycled[0], maxDepth=1))
[<Employee instance "Alice">]To remove this craziness[12], we can unindex Zane, and change
and reindex Alice.[12]If you want to, look at what happens when you go the
other way:>>> res = list(catalog.findRelationChains({'supervisor': 'Zane'}))
>>> def sortEqualLenByName(one):
... return len(one), one
...
>>> res.sort(key=sortEqualLenByName) # normalizes for test stability
>>> from __future__ import print_function
>>> print(res) # doctest: +NORMALIZE_WHITESPACE
[(<Employee instance "Alice">,),
(<Employee instance "Alice">, <Employee instance "Betty">),
(<Employee instance "Alice">, <Employee instance "Chuck">),
(<Employee instance "Alice">, <Employee instance "Betty">,
<Employee instance "Diane">),
(<Employee instance "Alice">, <Employee instance "Betty">,
<Employee instance "Edgar">),
cycle(<Employee instance "Alice">, <Employee instance "Betty">,
<Employee instance "Zane">),
(<Employee instance "Alice">, <Employee instance "Chuck">,
<Employee instance "Frank">),
(<Employee instance "Alice">, <Employee instance "Chuck">,
<Employee instance "Galyn">),
(<Employee instance "Alice">, <Employee instance "Betty">,
<Employee instance "Diane">, <Employee instance "Howie">)]>>> [zc.relation.interfaces.ICircularRelationPath.providedBy(r)
... for r in res]
[False, False, False, False, False, True, False, False, False]
>>> len(res[5].cycled)
1
>>> list(catalog.findRelations(res[5].cycled[0], maxDepth=1))
[<Employee instance "Alice">]>>> a.supervisor = None
>>> catalog.index(a)>>> list(catalog.findValueTokens(
... 'supervisor', {zc.relation.RELATION: 'Frank'}))
['Chuck', 'Alice']>>> catalog.unindex(z)>>> sorted(catalog.findRelationTokens({'supervisor': 'Betty'}))
['Diane', 'Edgar', 'Howie']

canFind

We've come to the last search method: canFind. We've gotten values and
relations, but what if you simply want to know if there is any
connection at all? For instance, is Alice a supervisor of Howie? Is
Chuck? To answer these questions, you can use thecanFindmethod
combined with thetargetQuerysearch argument.ThecanFindmethod takes the same arguments as findRelations. However,
it simply returns a boolean about whether the search has any results. This
is a convenience that also allows some extra optimizations.Does Betty supervise anyone?>>> catalog.canFind({'supervisor': 'Betty'})
TrueWhat about Howie?>>> catalog.canFind({'supervisor': 'Howie'})
FalseWhat about…Zane (no longer an employee)?>>> catalog.canFind({'supervisor': 'Zane'})
FalseIf we want to know if Alice or Chuck supervise Howie, then we want to specify
characteristics of two points on a path. To ask a question about the other
end of a path, usetargetQuery.Is Alice a supervisor of Howie?>>> catalog.canFind({'supervisor': 'Alice'},
... targetQuery={zc.relation.RELATION: 'Howie'})
TrueIs Chuck a supervisor of Howie?>>> catalog.canFind({'supervisor': 'Chuck'},
... targetQuery={zc.relation.RELATION: 'Howie'})
FalseIs Howie Alice’s employee?>>> catalog.canFind({zc.relation.RELATION: 'Howie'},
... targetQuery={'supervisor': 'Alice'})
TrueIs Howie Chuck’s employee?>>> catalog.canFind({zc.relation.RELATION: 'Howie'},
... targetQuery={'supervisor': 'Chuck'})
False(Note that, if your relations describe a hierarchy, searching up a hierarchy is
usually more efficient than searching down, so the second pair of questions is
generally preferable to the first in that case.)Working with More Complex RelationsSo far, our examples have used a simple relation, in which the indexed object
is one end of the relation, and the indexed value on the object is the other.
This example has let us look at all of the basic zc.relation catalog
functionality.As mentioned in the introduction, though, the catalog supports, and was
designed for, more complex relations. This section will quickly examine a
few examples of other uses.In this section, we will see several examples of ideas mentioned above but not
yet demonstrated.We can use interface attributes (values or callables) to define value
indexes.Using interface attributes will cause an attempt to adapt the relation if it
does not already provide the interface.We can use themultipleargument when defining a value index to indicate
that the indexed value is a collection.We can use thenameargument when defining a value index to specify the
name to be used in queries, rather than relying on the name of the interface
attribute or callable.Thefamilyargument in instantiating the catalog lets you change the
default btree family for relations and value indexes fromBTrees.family32.IFtoBTrees.family64.IF.Extrinsic Two-Way RelationsA simple variation of our current story is this: what if the indexed relation
were between two other objects–that is, what if the relation were extrinsic to
both participants?Let’s imagine we have relations that show biological parentage. We’ll want a
“Person” and a “Parentage” relation. We’ll define an interface forIParentageso we can see how using an interface to define a value index
works.>>> class Person(object):
... def __init__(self, name):
... self.name = name
... def __repr__(self):
... return '<Person %r>' % (self.name,)
...
>>> import zope.interface
>>> class IParentage(zope.interface.Interface):
... child = zope.interface.Attribute('the child')
... parents = zope.interface.Attribute('the parents')
...
>>> @zope.interface.implementer(IParentage)
... class Parentage(object):
...
... def __init__(self, child, parent1, parent2):
... self.child = child
... self.parents = (parent1, parent2)
...Now we’ll define the dumpers and loaders and then the catalog. Notice that
we are relying on a pattern: the dump must be called before the load.>>> _people = {}
>>> _relations = {}
>>> def dumpPeople(obj, catalog, cache):
... if _people.setdefault(obj.name, obj) is not obj:
... raise ValueError('we are assuming names are unique')
... return obj.name
...
>>> def loadPeople(token, catalog, cache):
... return _people[token]
...
>>> def dumpRelations(obj, catalog, cache):
... if _relations.setdefault(id(obj), obj) is not obj:
... raise ValueError('huh?')
... return id(obj)
...
>>> def loadRelations(token, catalog, cache):
... return _relations[token]
...
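The pattern behind these dumpers and loaders can be exercised on its own. Here is a minimal, hypothetical sketch (plain dicts instead of Person objects, and illustrative names that are not part of the zc.relation API) showing why the dump must run before the load:

```python
# Standalone sketch of the dump/load ("tokenize/resolve") contract.
_registry = {}

def dump(obj, catalog=None, cache=None):
    # Register the object under a unique token (here, its name).
    if _registry.setdefault(obj["name"], obj) is not obj:
        raise ValueError("we are assuming names are unique")
    return obj["name"]

def load(token, catalog=None, cache=None):
    # Only resolves tokens that dump has already registered.
    return _registry[token]

alice = {"name": "Alice"}
token = dump(alice)
assert load(token) is alice
```

A load before the corresponding dump would raise a KeyError, which is exactly the ordering constraint the catalog relies on.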
>>> catalog = zc.relation.catalog.Catalog(dumpRelations, loadRelations, family=BTrees.family64)
>>> catalog.addValueIndex(IParentage['child'], dumpPeople, loadPeople,
... btree=BTrees.family32.OO)
>>> catalog.addValueIndex(IParentage['parents'], dumpPeople, loadPeople,
... btree=BTrees.family32.OO, multiple=True,
... name='parent')
>>> catalog.addDefaultQueryFactory(
... zc.relation.queryfactory.TransposingTransitive(
... 'child', 'parent'))Now we have a catalog fully set up. Let’s add some relations.>>> a = Person('Alice')
>>> b = Person('Betty')
>>> c = Person('Charles')
>>> d = Person('Donald')
>>> e = Person('Eugenia')
>>> f = Person('Fred')
>>> g = Person('Gertrude')
>>> h = Person('Harry')
>>> i = Person('Iphigenia')
>>> j = Person('Jacob')
>>> k = Person('Karyn')
>>> l = Person('Lee')>>> r1 = Parentage(child=j, parent1=k, parent2=l)
>>> r2 = Parentage(child=g, parent1=i, parent2=j)
>>> r3 = Parentage(child=f, parent1=g, parent2=h)
>>> r4 = Parentage(child=e, parent1=g, parent2=h)
>>> r5 = Parentage(child=b, parent1=e, parent2=d)
>>> r6 = Parentage(child=a, parent1=e, parent2=c)

Here's that in one of our hierarchy diagrams.

    Karyn   Lee
       \    /
       Jacob    Iphigenia
           \    /
          Gertrude    Harry
                 \    /
               /-------\
            Fred       Eugenia
      Donald          /       \    Charles
           \         /         \      /
            Betty               Alice

Now we can index the relations, and ask some questions.

>>> for r in (r1, r2, r3, r4, r5, r6):
... catalog.index(r)
>>> query = catalog.tokenizeQuery
>>> sorted(catalog.findValueTokens(
... 'parent', query(child=a), maxDepth=1))
['Charles', 'Eugenia']
>>> sorted(catalog.findValueTokens('parent', query(child=g)))
['Iphigenia', 'Jacob', 'Karyn', 'Lee']
>>> sorted(catalog.findValueTokens(
... 'child', query(parent=h), maxDepth=1))
['Eugenia', 'Fred']
>>> sorted(catalog.findValueTokens('child', query(parent=h)))
['Alice', 'Betty', 'Eugenia', 'Fred']
>>> catalog.canFind(query(parent=h), targetQuery=query(child=d))
False
>>> catalog.canFind(query(parent=l), targetQuery=query(child=b))
TrueMulti-Way RelationsThe previous example quickly showed how to set the catalog up for a completely
extrinsic two-way relation. The same pattern can be extended for N-way
relations. For example, consider a four way relation in the form of
SUBJECTS PREDICATE OBJECTS [in CONTEXT]. For instance, we might
want to say “(joe,) SELLS (doughnuts, coffee) in corner_store”, where “(joe,)”
is the collection of subjects, “SELLS” is the predicate, “(doughnuts, coffee)”
is the collection of objects, and “corner_store” is the optional context.For this last example, we’ll integrate two components we haven’t seen examples
of here before: the ZODB and adaptation.

Our example ZODB approach uses OIDs as the tokens. This might be OK in some
cases, if you will never support multiple databases and you don’t need an
abstraction layer so that a different object can have the same identifier.>>> import persistent
>>> import struct
>>> class Demo(persistent.Persistent):
... def __init__(self, name):
... self.name = name
... def __repr__(self):
... return '<Demo instance %r>' % (self.name,)
...
>>> class IRelation(zope.interface.Interface):
... subjects = zope.interface.Attribute('subjects')
... predicate = zope.interface.Attribute('predicate')
... objects = zope.interface.Attribute('objects')
...
>>> class IContextual(zope.interface.Interface):
... def getContext():
... 'return context'
... def setContext(value):
... 'set context'
...
>>> @zope.interface.implementer(IContextual)
... class Contextual(object):
...
... _context = None
... def getContext(self):
... return self._context
... def setContext(self, value):
... self._context = value
...
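The Relation class defined next hands out its Contextual helper via __conform__, the PEP 246 adaptation hook that makes IContextual(rel1) work. A minimal, hypothetical sketch of that mechanism, independent of zope.interface's actual internals:

```python
# Hypothetical sketch of PEP 246 adaptation: calling an interface asks
# the object's __conform__ for an adapter. Not the zope.interface code.
class IFace:
    def __call__(self, obj):
        adapter = getattr(obj, '__conform__', lambda iface: None)(self)
        if adapter is None:
            raise TypeError('could not adapt')
        return adapter

IContextual = IFace()

class Contextual:
    def __init__(self):
        self.context = None

class Relation:
    def __init__(self):
        self._contextual = Contextual()
    def __conform__(self, iface):
        # Hand out the helper when asked for IContextual.
        if iface is IContextual:
            return self._contextual

r = Relation()
assert IContextual(r) is r._contextual
```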
>>> @zope.interface.implementer(IRelation)
... class Relation(persistent.Persistent):
...
... def __init__(self, subjects, predicate, objects):
... self.subjects = subjects
... self.predicate = predicate
... self.objects = objects
... self._contextual = Contextual()
...
... def __conform__(self, iface):
... if iface is IContextual:
... return self._contextual
...(When using zope.component, the__conform__would normally be unnecessary;
however, this package does not depend on zope.component.)>>> def dumpPersistent(obj, catalog, cache):
... if obj._p_jar is None:
... catalog._p_jar.add(obj) # assumes something else places it
... return struct.unpack('<q', obj._p_oid)[0]
...
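In isolation, the OID-to-token conversion used by this dump function and its loader below is just a struct round trip (a sketch independent of the ZODB):

```python
import struct

# An 8-byte OID and the 64-bit integer token it maps to.
# '<q' is a little-endian signed 64-bit int, matching the dump above.
oid = b"\x07\x00\x00\x00\x00\x00\x00\x00"
token = struct.unpack("<q", oid)[0]
assert token == 7                      # low byte comes first
assert struct.pack("<q", token) == oid  # and the mapping round-trips
```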
>>> def loadPersistent(token, catalog, cache):
... return catalog._p_jar.get(struct.pack('<q', token))
...>>> from ZODB.tests.util import DB
>>> db = DB()
>>> conn = db.open()
>>> root = conn.root()
>>> catalog = root['catalog'] = zc.relation.catalog.Catalog(
... dumpPersistent, loadPersistent, family=BTrees.family64)
>>> catalog.addValueIndex(IRelation['subjects'],
... dumpPersistent, loadPersistent, multiple=True, name='subject')
>>> catalog.addValueIndex(IRelation['objects'],
... dumpPersistent, loadPersistent, multiple=True, name='object')
>>> catalog.addValueIndex(IRelation['predicate'], btree=BTrees.family32.OO)
>>> catalog.addValueIndex(IContextual['getContext'],
... dumpPersistent, loadPersistent, name='context')
>>> import transaction
>>> transaction.commit()

The dumpPersistent and loadPersistent functions are a bit of a toy, as warned
above. Also, while our predicate will be stored as a string, some programmers
may prefer to have the dump verify that the string has been
explicitly registered in some way, to prevent typos. Obviously, we are not
bothering with this for our example.We make some objects, and then we make some relations with those objects and
index them.>>> joe = root['joe'] = Demo('joe')
>>> sara = root['sara'] = Demo('sara')
>>> jack = root['jack'] = Demo('jack')
>>> ann = root['ann'] = Demo('ann')
>>> doughnuts = root['doughnuts'] = Demo('doughnuts')
>>> coffee = root['coffee'] = Demo('coffee')
>>> muffins = root['muffins'] = Demo('muffins')
>>> cookies = root['cookies'] = Demo('cookies')
>>> newspaper = root['newspaper'] = Demo('newspaper')
>>> corner_store = root['corner_store'] = Demo('corner_store')
>>> bistro = root['bistro'] = Demo('bistro')
>>> bakery = root['bakery'] = Demo('bakery')>>> SELLS = 'SELLS'
>>> BUYS = 'BUYS'
>>> OBSERVES = 'OBSERVES'>>> rel1 = root['rel1'] = Relation((joe,), SELLS, (doughnuts, coffee))
>>> IContextual(rel1).setContext(corner_store)
>>> rel2 = root['rel2'] = Relation((sara, jack), SELLS,
... (muffins, doughnuts, cookies))
>>> IContextual(rel2).setContext(bakery)
>>> rel3 = root['rel3'] = Relation((ann,), BUYS, (doughnuts,))
>>> rel4 = root['rel4'] = Relation((sara,), BUYS, (bistro,))>>> for r in (rel1, rel2, rel3, rel4):
... catalog.index(r)
...Now we can ask a simple question. Where do they sell doughnuts?>>> query = catalog.tokenizeQuery
>>> sorted(catalog.findValues(
... 'context',
... (query(predicate=SELLS, object=doughnuts))),
... key=lambda ob: ob.name)
[<Demo instance 'bakery'>, <Demo instance 'corner_store'>]Hopefully these examples give you further ideas on how you can use this tool.Additional FunctionalityThis section introduces peripheral functionality. We will learn the following.Listeners can be registered in the catalog. They are alerted when a relation
is added, modified, or removed; and when the catalog is cleared and copied
(see below).Theclearmethod clears the relations in the catalog.Thecopymethod makes a copy of the current catalog by copying internal
data structures, rather than reindexing the relations, which can be a
significant optimization opportunity. This copies value indexes and search
indexes; and gives listeners an opportunity to specify what, if anything,
should be included in the new copy.TheignoreSearchIndexargument to the five pertinent search methods
causes the search to ignore search indexes, even if there is an appropriate
one.findRelationTokens()(without arguments) returns the BTree set of all
relation tokens in the catalog.findValueTokens(INDEX_NAME)(where “INDEX_NAME” should be replaced with
an index name) returns the BTree set of all value tokens in the catalog for
the given index name.ListenersA variety of potential clients may want to be alerted when the catalog changes.
zc.relation does not depend on zope.event; instead, listeners may be registered directly with the catalog for
various changes. Let’s make a quick demo listener. Theadditionsandremovalsarguments are dictionaries of {value name: iterable of added or
removed value tokens}.>>> def pchange(d):
... pprint.pprint(dict(
... (k, v is not None and sorted(set(v)) or v) for k, v in d.items()))
>>> @zope.interface.implementer(zc.relation.interfaces.IListener)
... class DemoListener(persistent.Persistent):
...
... def relationAdded(self, token, catalog, additions):
... print('a relation (token %r) was added to %r '
... 'with these values:' % (token, catalog))
... pchange(additions)
... def relationModified(self, token, catalog, additions, removals):
... print('a relation (token %r) in %r was modified '
... 'with these additions:' % (token, catalog))
... pchange(additions)
... print('and these removals:')
... pchange(removals)
... def relationRemoved(self, token, catalog, removals):
... print('a relation (token %r) was removed from %r '
... 'with these values:' % (token, catalog))
... pchange(removals)
... def sourceCleared(self, catalog):
... print('catalog %r had all relations unindexed' % (catalog,))
... def sourceAdded(self, catalog):
... print('now listening to catalog %r' % (catalog,))
... def sourceRemoved(self, catalog):
... print('no longer listening to catalog %r' % (catalog,))
... def sourceCopied(self, original, copy):
... print('catalog %r made a copy %r' % (catalog, copy))
... copy.addListener(self)
...Listeners can be installed multiple times.Listeners can be added as persistent weak references, so that, if they are
deleted elsewhere, a ZODB pack will not consider the reference in the catalog
to be something preventing garbage collection.We’ll install one of these demo listeners into our new catalog as a
normal reference, the default behavior. Then we’ll show some example messages
sent to the demo listener.>>> listener = DemoListener()
>>> catalog.addListener(listener) # doctest: +ELLIPSIS
now listening to catalog <zc.relation.catalog.Catalog object at ...>
>>> rel5 = root['rel5'] = Relation((ann,), OBSERVES, (newspaper,))
>>> catalog.index(rel5) # doctest: +ELLIPSIS
a relation (token ...) was added to <...Catalog...> with these values:
{'context': None,
'object': [...],
'predicate': ['OBSERVES'],
'subject': [...]}
>>> rel5.subjects = (jack,)
>>> IContextual(rel5).setContext(bistro)
>>> catalog.index(rel5) # doctest: +ELLIPSIS
a relation (token ...) in ...Catalog... was modified with these additions:
{'context': [...], 'subject': [...]}
and these removals:
{'subject': [...]}
>>> catalog.unindex(rel5) # doctest: +ELLIPSIS
a relation (token ...) was removed from <...Catalog...> with these values:
{'context': [...],
'object': [...],
'predicate': ['OBSERVES'],
'subject': [...]}>>> catalog.removeListener(listener) # doctest: +ELLIPSIS
no longer listening to catalog <...Catalog...>
>>> catalog.index(rel5) # doctest: +ELLIPSIS

The only two methods not shown by those examples are sourceCleared and sourceCopied. We'll get to those very soon below.

The clear Method

The clear method simply unindexes all relations from a catalog. Installed
listeners havesourceClearedcalled.>>> len(catalog)
5>>> catalog.addListener(listener) # doctest: +ELLIPSIS
now listening to catalog <zc.relation.catalog.Catalog object at ...>>>> catalog.clear() # doctest: +ELLIPSIS
catalog <...Catalog...> had all relations unindexed>>> len(catalog)
0
>>> sorted(catalog.findValues(
... 'context',
... (query(predicate=SELLS, object=doughnuts))),
... key=lambda ob: ob.name)
[]ThecopyMethodSometimes you may want to copy a relation catalog. One way of doing this is
to create a new catalog, set it up like the current one, and then reindex
all the same relations. This is unnecessarily slow for programmer and
computer. Thecopymethod makes a new catalog with the same corpus of
indexed relations by copying internal data structures.Search indexes are requested to make new copies of themselves for the new
catalog; and listeners are given an opportunity to react as desired to the new
copy, including installing themselves, and/or another object of their choosing
as a listener.Let’s make a copy of a populated index with a search index and a listener.
Notice in our listener thatsourceCopiedadds itself as a listener to the
new copy. This is done at the very end of thecopyprocess.>>> for r in (rel1, rel2, rel3, rel4, rel5):
... catalog.index(r)
... # doctest: +ELLIPSIS
a relation ... was added...
a relation ... was added...
a relation ... was added...
a relation ... was added...
a relation ... was added...
>>> BEGAT = 'BEGAT'
>>> rel6 = root['rel6'] = Relation((jack, ann), BEGAT, (sara,))
>>> henry = root['henry'] = Demo('henry')
>>> rel7 = root['rel7'] = Relation((sara, joe), BEGAT, (henry,))
>>> catalog.index(rel6) # doctest: +ELLIPSIS
a relation (token ...) was added to <...Catalog...> with these values:
{'context': None,
'object': [...],
'predicate': ['BEGAT'],
'subject': [..., ...]}
>>> catalog.index(rel7) # doctest: +ELLIPSIS
a relation (token ...) was added to <...Catalog...> with these values:
{'context': None,
'object': [...],
'predicate': ['BEGAT'],
'subject': [..., ...]}
>>> catalog.addDefaultQueryFactory(
... zc.relation.queryfactory.TransposingTransitive(
... 'subject', 'object', {'predicate': BEGAT}))
...
>>> list(catalog.findValues(
... 'object', query(subject=jack, predicate=BEGAT)))
[<Demo instance 'sara'>, <Demo instance 'henry'>]
>>> catalog.addSearchIndex(
... zc.relation.searchindex.TransposingTransitiveMembership(
... 'subject', 'object', static={'predicate': BEGAT}))
>>> sorted(
... catalog.findValues(
... 'object', query(subject=jack, predicate=BEGAT)),
... key=lambda o: o.name)
[<Demo instance 'henry'>, <Demo instance 'sara'>]>>> newcat = root['newcat'] = catalog.copy() # doctest: +ELLIPSIS
catalog <...Catalog...> made a copy <...Catalog...>
now listening to catalog <...Catalog...>
>>> transaction.commit()Now the copy has its own copies of internal data structures and of the
search index. For example, let's modify the relations and add a new one to the
copy.>>> mary = root['mary'] = Demo('mary')
>>> buffy = root['buffy'] = Demo('buffy')
>>> zack = root['zack'] = Demo('zack')
>>> rel7.objects += (mary,)
>>> rel8 = root['rel8'] = Relation((henry, buffy), BEGAT, (zack,))
>>> newcat.index(rel7) # doctest: +ELLIPSIS
a relation (token ...) in ...Catalog... was modified with these additions:
{'object': [...]}
and these removals:
{}
>>> newcat.index(rel8) # doctest: +ELLIPSIS
a relation (token ...) was added to ...Catalog... with these values:
{'context': None,
'object': [...],
'predicate': ['BEGAT'],
'subject': [..., ...]}
>>> len(newcat)
8
>>> sorted(
... newcat.findValues(
... 'object', query(subject=jack, predicate=BEGAT)),
... key=lambda o: o.name) # doctest: +NORMALIZE_WHITESPACE
[<Demo instance 'henry'>, <Demo instance 'mary'>, <Demo instance 'sara'>,
<Demo instance 'zack'>]
>>> sorted(
... newcat.findValues(
... 'object', query(subject=sara)),
... key=lambda o: o.name) # doctest: +NORMALIZE_WHITESPACE
[<Demo instance 'bistro'>, <Demo instance 'cookies'>,
<Demo instance 'doughnuts'>, <Demo instance 'henry'>,
<Demo instance 'mary'>, <Demo instance 'muffins'>]The original catalog is not modified.>>> len(catalog)
7
>>> sorted(
... catalog.findValues(
... 'object', query(subject=jack, predicate=BEGAT)),
... key=lambda o: o.name)
[<Demo instance 'henry'>, <Demo instance 'sara'>]
>>> sorted(
... catalog.findValues(
... 'object', query(subject=sara)),
... key=lambda o: o.name) # doctest: +NORMALIZE_WHITESPACE
[<Demo instance 'bistro'>, <Demo instance 'cookies'>,
<Demo instance 'doughnuts'>, <Demo instance 'henry'>,
<Demo instance 'muffins'>]TheignoreSearchIndexargumentThe five methods that can use search indexes,findValues,findValueTokens,findRelations,findRelationTokens, andcanFind, can be explicitly requested to ignore any pertinent search index
using theignoreSearchIndexargument.We can see this easily with the token-related methods: the search index result
will be a BTree set, while without the search index the result will be a
generator.>>> res1 = newcat.findValueTokens(
... 'object', query(subject=jack, predicate=BEGAT))
>>> res1 # doctest: +ELLIPSIS
LFSet([..., ..., ..., ...])
>>> res2 = newcat.findValueTokens(
... 'object', query(subject=jack, predicate=BEGAT),
... ignoreSearchIndex=True)
>>> res2 # doctest: +ELLIPSIS
<generator object ... at 0x...>
>>> sorted(res2) == list(res1)
True>>> res1 = newcat.findRelationTokens(
... query(subject=jack, predicate=BEGAT))
>>> res1 # doctest: +ELLIPSIS
LFSet([..., ..., ...])
>>> res2 = newcat.findRelationTokens(
... query(subject=jack, predicate=BEGAT), ignoreSearchIndex=True)
>>> res2 # doctest: +ELLIPSIS
<generator object ... at 0x...>
>>> sorted(res2) == list(res1)
TrueWe can see that the other methods take the argument, but the results look the
same as usual.>>> res = newcat.findValues(
... 'object', query(subject=jack, predicate=BEGAT),
... ignoreSearchIndex=True)
>>> res # doctest: +ELLIPSIS
<generator object ... at 0x...>
>>> list(res) == list(newcat.resolveValueTokens(newcat.findValueTokens(
... 'object', query(subject=jack, predicate=BEGAT),
... ignoreSearchIndex=True), 'object'))
True>>> res = newcat.findRelations(
... query(subject=jack, predicate=BEGAT),
... ignoreSearchIndex=True)
>>> res # doctest: +ELLIPSIS
<generator object ... at 0x...>
>>> list(res) == list(newcat.resolveRelationTokens(
... newcat.findRelationTokens(
... query(subject=jack, predicate=BEGAT),
... ignoreSearchIndex=True)))
True>>> newcat.canFind(
... query(subject=jack, predicate=BEGAT), ignoreSearchIndex=True)
TruefindRelationTokens()If you callfindRelationTokenswithout any arguments, you will get the
BTree set of all relation tokens in the catalog. This can be handy for tests
and for advanced uses of the catalog.>>> newcat.findRelationTokens() # doctest: +ELLIPSIS
<BTrees.LFBTree.LFTreeSet object at ...>
>>> len(newcat.findRelationTokens())
8
>>> set(newcat.resolveRelationTokens(newcat.findRelationTokens())) == set(
... (rel1, rel2, rel3, rel4, rel5, rel6, rel7, rel8))
TruefindValueTokens(INDEX_NAME)If you callfindValueTokenswith only an index name, you will get the BTree
structure of all tokens for that value in the index. This can be handy for
tests and for advanced uses of the catalog.>>> newcat.findValueTokens('predicate') # doctest: +ELLIPSIS
<BTrees.OOBTree.OOBTree object at ...>
>>> list(newcat.findValueTokens('predicate'))
['BEGAT', 'BUYS', 'OBSERVES', 'SELLS']ConclusionReviewThat brings us to the end of our introductory examples. Let’s review, and
then look at where you can go from here.Relations are objects with indexed values.The relation catalog indexes relations. The relations can be one-way,
two-way, three-way, or N-way, as long as you tell the catalog to index the
different values.Creating a catalog:Relations and their values are stored in the catalog as tokens: unique
identifiers that you can resolve back to the original value. Integers are
the most efficient tokens, but others can work fine too.

Token type determines the BTree module needed.

If the tokens are 32-bit ints, choose BTrees.family32.II, BTrees.family32.IF or BTrees.family32.IO.

If the tokens are 64-bit ints, choose BTrees.family64.II, BTrees.family64.IF or BTrees.family64.IO.

If they are anything else, choose BTrees.family32.OI, BTrees.family64.OI, or BTrees.family32.OO (or
BTrees.family64.OO–they are the same).Within these rules, the choice is somewhat arbitrary unless you plan to
merge these results with that of another source that is using a
particular BTree module. BTree set operations only work within the same
module, so you must match module to module.Thefamilyargument in instantiating the catalog lets you change the
default btree family for relations and value indexes fromBTrees.family32.IFtoBTrees.family64.IF.You must define your own functions for tokenizing and resolving tokens.
These functions are registered with the catalog for the relations and for
each of their value indexes.You add value indexes to relation catalogs to be able to search. Values
can be identified to the catalog with callables or interface elements.Using interface attributes will cause an attempt to adapt the
relation if it does not already provide the interface.We can use themultipleargument when defining a value index to
indicate that the indexed value is a collection. This defaults to
False.We can use thenameargument when defining a value index to
specify the name to be used in queries, rather than relying on the
name of the interface attribute or callable.You can set up search indexes to speed up specific searches, usually
transitive.Listeners can be registered in the catalog. They are alerted when a
relation is added, modified, or removed; and when the catalog is cleared
and copied.Catalog Management:Relations are indexed withindex(relation), and removed from the
catalog withunindex(relation).index_doc(relation_token, relation)andunindex_doc(relation_token)also work.Theclearmethod clears the relations in the catalog.Thecopymethod makes a copy of the current catalog by copying
internal data structures, rather than reindexing the relations, which can
be a significant optimization opportunity. This copies value indexes and
search indexes; and gives listeners an opportunity to specify what, if
anything, should be included in the new copy.Searching a catalog:Queries to the relation catalog are formed with dicts.Query keys are the names of the indexes you want to search, or, for the
special case of precise relations, the ``zc.relation.RELATION`` constant.

Query values are the tokens of the results you want to match; or
None, indicating relations that have None as a value (or an empty
collection, if it is a multiple). Search values can use
``zc.relation.catalog.any(args)`` or ``zc.relation.catalog.Any(args)`` to
specify multiple (non-None) results to match for a given key.

The index has a variety of methods to help you work with tokens.
``tokenizeQuery`` is typically the most used, though others are
available.

To find relations that match a query, use ``findRelations`` or
``findRelationTokens``. Calling ``findRelationTokens`` without any
arguments returns the BTree set of all relation tokens in the catalog.

To find values that match a query, use ``findValues`` or
``findValueTokens``. Calling ``findValueTokens`` with only the name
of a value index returns the BTree set of all tokens in the catalog for
that value index.

You search transitively by using a query factory. The
``zc.relation.queryfactory.TransposingTransitive`` is a good common case
factory that lets you walk up and down a hierarchy. A query factory can
be passed in as an argument to search methods as a ``queryFactory``, or
installed as a default behavior using ``addDefaultQueryFactory``.

To find how a query is related, use ``findRelationChains`` or
``findRelationTokenChains``.

To find out if a query is related, use ``canFind``.

Circular transitive relations are handled to prevent infinite loops. They
are identified in ``findRelationChains`` and ``findRelationTokenChains``
with a ``zc.relation.interfaces.ICircularRelationPath`` marker interface.

Search methods share the following arguments:

- ``maxDepth``, limiting the transitive depth for searches;
- ``filter``, allowing code to filter transitive paths;
- ``targetQuery``, allowing a query to filter transitive paths on the
  basis of the endpoint;
- ``targetFilter``, allowing code to filter transitive paths on the basis
  of the endpoint; and
- ``queryFactory``, mentioned above.

In addition, the ``ignoreSearchIndex`` argument to ``findRelations``,
``findRelationTokens``, ``findValues``, ``findValueTokens``, and
``canFind`` causes the search to ignore search indexes, even if there is
an appropriate one.

Next Steps

If you want to read more, next steps depend on how you like to learn. Here
are some of the other documents in the zc.relation package.

optimization.rst:
    Best practices for optimizing your use of the relation catalog.

searchindex.rst:
    Query factories and search indexes: from basics to nitty gritty
    details.

tokens.rst:
    This document explores the details of tokens. All God's chillun
    love tokens, at least if God's chillun are writing non-toy apps
    using zc.relation. It includes discussion of the token helpers that
    the catalog provides, how to use zope.app.intid-like registries with
    zc.relation, how to use tokens to "join" query results reasonably
    efficiently, and how to index joins. It also is unnecessarily
    mind-blowing because of the examples used.

interfaces.py:
    The contract, for nuts and bolts.

Finally, the truly die-hard might also be interested in the timeit
directory, which holds scripts used to test assumptions and learn.

[13] OK, you care about how that query factory worked, so
we will look into it a bit. Let’s talk through two steps of the
transitive search in the second question. The catalog initially
performs the initial intransitive search requested: find relations
for which Betty is the supervisor. That's Diane and Edgar.

Now, for each of the results, the catalog asks the query factory for
next steps. Let's take Diane. The catalog says to the factory,
"Given this query for relations where Betty is supervisor, I got
this result of Diane. Do you have any other queries I should try to
look further?". The factory also gets the catalog instance so it
can use it to answer the question if it needs to.

OK, the next part is where your brain hurts. Hang on.

In our case, the factory sees that the query was for supervisor. Its
other key, the one it transposes with, is ``zc.relation.RELATION``. The
factory gets the transposing key's result for the current token. So, for
us, a key of ``zc.relation.RELATION`` is actually a no-op: the result
*is* the current token, Diane. Then, the factory has its answer: replace
the old value of supervisor in the query, Betty, with the result, Diane.
The next transitive query should be {'supervisor': 'Diane'}. Ta-da.

Tokens and Joins: zc.relation Catalog Extended Example

Introduction and Set Up

This document assumes you have read the introductory README.rst and want
to learn a bit more by example. In it, we will explore a more
complicated set of relations that demonstrates most of the aspects of
working with tokens. In particular, we will look at joins, which will
also give us a chance to look more in depth at query factories and
search indexes, and introduce the idea of listeners. It will not explain
the basics that the README already addressed.

Imagine we are indexing security assertions in a system. In this
system, users may have roles within an organization. Each organization
may have multiple child organizations and may have a single parent
organization. A user with a role in a parent organization will have the
same role in all transitively connected child organizations.

We have two kinds of relations, then. One kind of relation will model
the hierarchy of organizations. We'll do it with an intrinsic relation
of organizations to their children: that reflects the fact that parent
organizations choose and are comprised of their children; children do
not choose their parents.

The other relation will model the (multiple) roles a (single) user has
in a (single) organization. This relation will be entirely extrinsic.

We could create two catalogs, one for each type. Or we could put them
both in the same catalog. Initially, we'll go with the single-catalog
approach for our examples. This single catalog, then, will be indexing
a heterogeneous collection of relations.

Let's define the two relations with interfaces. We'll include one
accessor, getOrganization, largely to show how to handle methods.

>>> import zope.interface
>>> class IOrganization(zope.interface.Interface):
... title = zope.interface.Attribute('the title')
... parts = zope.interface.Attribute(
... 'the organizations that make up this one')
...
>>> class IRoles(zope.interface.Interface):
... def getOrganization():
... 'return the organization in which this relation operates'
... principal_id = zope.interface.Attribute(
... 'the principal id whose roles this relation lists')
... role_ids = zope.interface.Attribute(
... 'the role ids that the principal explicitly has in the '
... 'organization. The principal may have other roles via '
... 'roles in parent organizations.')
...

Now we can create some classes. In the README example, the setup was a bit
of a toy. This time we will be just a bit more practical. We'll also expect
to be operating within the ZODB, with a root and transactions. [14]

[14] Here we will set up a ZODB instance for us to use.

>>> from ZODB.tests.util import DB
>>> db = DB()
>>> conn = db.open()
>>> root = conn.root()

Here's how we will dump and load our relations: use a "registry"
object, similar to an intid utility. [15]

[15] Here's a simple persistent keyreference. Notice that it is
not persistent itself: this is important for conflict resolution to be
able to work (which we don't show here, but we're trying to lean more
towards real usage for this example).

>>> from functools import total_ordering
>>> @total_ordering
... class Reference(object): # see zope.app.keyreference
... def __init__(self, obj):
... self.object = obj
... def _get_sorting_key(self):
... # this doesn't work during conflict resolution. See
... # zope.app.keyreference.persistent, 3.5 release, for current
... # best practice.
... if self.object._p_jar is None:
... raise ValueError(
... 'can only compare when both objects have connections')
... return self.object._p_oid or ''
... def __lt__(self, other):
... # this doesn't work during conflict resolution. See
... # zope.app.keyreference.persistent, 3.5 release, for current
... # best practice.
... if not isinstance(other, Reference):
... raise ValueError('can only compare with Reference objects')
... return self._get_sorting_key() < other._get_sorting_key()
... def __eq__(self, other):
... # this doesn't work during conflict resolution. See
... # zope.app.keyreference.persistent, 3.5 release, for current
... # best practice.
... if not isinstance(other, Reference):
... raise ValueError('can only compare with Reference objects')
... return self._get_sorting_key() == other._get_sorting_key()

Here's a simple integer identifier tool.

>>> import persistent
>>> import BTrees
>>> class Registry(persistent.Persistent): # see zope.app.intid
... def __init__(self, family=BTrees.family32):
... self.family = family
... self.ids = self.family.IO.BTree()
... self.refs = self.family.OI.BTree()
... def getId(self, obj):
... if not isinstance(obj, persistent.Persistent):
... raise ValueError('not a persistent object', obj)
... if obj._p_jar is None:
... self._p_jar.add(obj)
... ref = Reference(obj)
... id = self.refs.get(ref)
... if id is None:
... # naive for conflict resolution; see zope.app.intid
... if self.ids:
... id = self.ids.maxKey() + 1
... else:
... id = self.family.minint
... self.ids[id] = ref
... self.refs[ref] = id
... return id
... def __contains__(self, obj):
... if (not isinstance(obj, persistent.Persistent) or
... obj._p_oid is None):
... return False
... return Reference(obj) in self.refs
... def getObject(self, id, default=None):
... res = self.ids.get(id, None)
... if res is None:
... return default
... else:
... return res.object
... def remove(self, r):
... if isinstance(r, int):
... self.refs.pop(self.ids.pop(r))
... elif (not isinstance(r, persistent.Persistent) or
... r._p_oid is None):
... raise LookupError(r)
... else:
... self.ids.pop(self.refs.pop(Reference(r)))
...
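The Registry above is essentially an intid-style utility: a two-way mapping between objects and small integer tokens. The same invariants can be sketched without ZODB; this standalone version (hypothetical names, not part of zc.relation or zope.app.intid) shows what ``getId``, ``getObject``, and ``remove`` maintain:

```python
# A minimal in-memory analogue of the Registry above: a two-way
# mapping between objects and integer tokens. Names are illustrative
# only; the real Registry works against persistent ZODB references.
class ToyRegistry:
    def __init__(self):
        self.ids = {}   # token -> object
        self.refs = {}  # id(object) -> token (identity-based, like Reference)
        self._next = 0

    def getId(self, obj):
        # Hand out a stable token per object, minting one on first use.
        key = id(obj)
        token = self.refs.get(key)
        if token is None:
            token = self._next
            self._next += 1
            self.ids[token] = obj
            self.refs[key] = token
        return token

    def getObject(self, token, default=None):
        return self.ids.get(token, default)

    def remove(self, token):
        # Drop both directions of the mapping.
        obj = self.ids.pop(token)
        del self.refs[id(obj)]
```

The point of the two mirrored BTrees (``ids`` and ``refs``) in the real Registry is the same as the two dicts here: token resolution and token minting each need their own lookup direction.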
>>> registry = root['registry'] = Registry()

>>> import transaction
>>> transaction.commit()

In this implementation of the "dump" method, we use the cache just to
show you how you might use it. It probably is overkill for this job,
and maybe even a speed loss, but you can see the idea.

>>> def dump(obj, catalog, cache):
... reg = cache.get('registry')
... if reg is None:
... reg = cache['registry'] = catalog._p_jar.root()['registry']
... return reg.getId(obj)
...
>>> def load(token, catalog, cache):
... reg = cache.get('registry')
... if reg is None:
... reg = cache['registry'] = catalog._p_jar.root()['registry']
... return reg.getObject(token)
...

Now we can create a relation catalog to hold these items.

>>> import zc.relation.catalog
>>> catalog = root['catalog'] = zc.relation.catalog.Catalog(dump, load)
>>> transaction.commit()

Now we set up our indexes. We'll start with just the organizations, and
set up the catalog with them. This part will be similar to the example
in README.rst, but will introduce more discussions of optimizations and
tokens. Then we’ll add in the part about roles, and explore queries and
token-based “joins”.OrganizationsThe organization will hold a set of organizations. This is actually not
inherently easy in the ZODB because this means that we need to compare
or hash persistent objects, which does not work reliably over time and
across machines out-of-the-box. To side-step the issue for this example,
and still do something a bit interesting and real-world, we’ll use the
registry tokens introduced above. This will also give us a chance to
talk a bit more about optimizations and tokens. (If you would like
to sanely and transparently hold a set of persistent objects, try the
zc.set package XXX not yet.)

>>> import BTrees
>>> import persistent
>>> @zope.interface.implementer(IOrganization)
... @total_ordering
... class Organization(persistent.Persistent):
...
... def __init__(self, title):
... self.title = title
... self.parts = BTrees.family32.IF.TreeSet()
... # the next parts just make the tests prettier
... def __repr__(self):
... return '<Organization instance "' + self.title + '">'
... def __lt__(self, other):
... # pukes if other doesn't have name
... return self.title < other.title
... def __eq__(self, other):
... return self is other
... def __hash__(self):
... return 1 # dummy
...

OK, now we know how organizations will work. Now we can add the ``parts``
index to the catalog. This will do a few new things from how we added
indexes in the README.

>>> catalog.addValueIndex(IOrganization['parts'], multiple=True,
...     name="part")

So, what's different from the README examples?

First, we are using an interface element to define the value to be indexed.
It provides an interface to which objects will be adapted, a default name
for the index, and information as to whether the attribute should be used
directly or called.

Second, we are not specifying a dump or load. They are None. This
means that the indexed value can already be treated as a token. This
can allow a very significant optimization for reindexing if the indexed
value is a large collection using the same BTree family as the
index--which leads us to the next difference.

Third, we are specifying that ``multiple=True``. This means that the value
on a given relation that provides or can be adapted to IOrganization will
have a collection of ``parts``. These will always be regarded as a set,
whether the actual collection is a BTrees set or the keys of a BTree.

Last, we are specifying a name to be used for queries. I find that queries
read more easily when the query keys are singular, so I often rename
plurals.

As in the README, we can add another simple transposing transitive query
factory, switching between 'part' and None.

>>> import zc.relation.queryfactory
>>> factory1 = zc.relation.queryfactory.TransposingTransitive(
... 'part', None)
>>> catalog.addDefaultQueryFactory(factory1)Let’s add a couple of search indexes in too, of the hierarchy looking up…>>> import zc.relation.searchindex
>>> catalog.addSearchIndex(
... zc.relation.searchindex.TransposingTransitiveMembership(
... 'part', None))

...and down.

>>> catalog.addSearchIndex(
... zc.relation.searchindex.TransposingTransitiveMembership(
... None, 'part'))

PLEASE NOTE: the search index looking up is not a good idea practically. The
index is designed for looking down [16].

[16] The TransposingTransitiveMembership indexes
provide ISearchIndex.

>>> from zope.interface.verify import verifyObject
>>> import zc.relation.interfaces
>>> index = list(catalog.iterSearchIndexes())[0]
>>> verifyObject(zc.relation.interfaces.ISearchIndex, index)
TrueLet’s create and add a few organizations.We’ll make a structure like this[24]:Ynod Corp Mangement Zookd Corp Management
/ | \ / | \
Ynod Devs Ynod SAs Ynod Admins Zookd Admins Zookd SAs Zookd Devs
/ \ \ / / \
Y3L4 Proj Bet Proj Ynod Zookd Task Force Zookd hOgnmd Zookd NbdHere’s the Python.>>> orgs = root['organizations'] = BTrees.family32.OO.BTree()
>>> for nm, parts in (
... ('Y3L4 Proj', ()),
... ('Bet Proj', ()),
... ('Ynod Zookd Task Force', ()),
... ('Zookd hOgnmd', ()),
... ('Zookd Nbd', ()),
... ('Ynod Devs', ('Y3L4 Proj', 'Bet Proj')),
... ('Ynod SAs', ()),
... ('Ynod Admins', ('Ynod Zookd Task Force',)),
... ('Zookd Admins', ('Ynod Zookd Task Force',)),
... ('Zookd SAs', ()),
... ('Zookd Devs', ('Zookd hOgnmd', 'Zookd Nbd')),
... ('Ynod Corp Management', ('Ynod Devs', 'Ynod SAs', 'Ynod Admins')),
... ('Zookd Corp Management', ('Zookd Devs', 'Zookd SAs',
... 'Zookd Admins'))):
... org = Organization(nm)
... for part in parts:
... ignore = org.parts.insert(registry.getId(orgs[part]))
... orgs[nm] = org
... catalog.index(org)
...

Now the catalog knows about the relations.

>>> len(catalog)
13
>>> root['dummy'] = Organization('Foo')
>>> root['dummy'] in catalog
False
>>> orgs['Y3L4 Proj'] in catalog
True

Also, now we can search. To do this, we can use some of the token methods
that the catalog provides. The most commonly used is ``tokenizeQuery``. It
takes a query with values that are not tokenized and converts them to
values that are tokenized.

>>> Ynod_SAs_id = registry.getId(orgs['Ynod SAs'])
>>> catalog.tokenizeQuery({None: orgs['Ynod SAs']}) == {
... None: Ynod_SAs_id}
True
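Conceptually, ``tokenizeQuery`` just runs each query value through the dump callable associated with that index (the None key uses the catalog-wide dump we passed to the Catalog constructor). A rough standalone sketch of that behavior, with hypothetical names and without the real catalog machinery:

```python
# Sketch of what a tokenizeQuery-style helper does: map each query
# value through the dump callable registered for its index name.
# Illustrative only -- not zc.relation's actual implementation.
def tokenize_query(query, dumpers):
    result = {}
    for name, value in query.items():
        dump = dumpers.get(name)
        # A dump of None means "values are already tokens".
        result[name] = value if dump is None else dump(value)
    return result

# Toy setup: the None key dumps organizations to registry ids;
# 'part' values are already tokens (dump is None).
orgs_to_id = {'Ynod SAs': 42}
dumpers = {None: lambda org: orgs_to_id[org], 'part': None}

assert tokenize_query({None: 'Ynod SAs'}, dumpers) == {None: 42}
assert tokenize_query({'part': 7}, dumpers) == {'part': 7}
```

This is why the 'part' queries below are no-ops: their dump is None, so the value passes through unchanged.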
>>> Zookd_SAs_id = registry.getId(orgs['Zookd SAs'])
>>> Zookd_Devs_id = registry.getId(orgs['Zookd Devs'])
>>> catalog.tokenizeQuery(
... {None: zc.relation.catalog.any(
... orgs['Zookd SAs'], orgs['Zookd Devs'])}) == {
... None: zc.relation.catalog.any(Zookd_SAs_id, Zookd_Devs_id)}
True

Of course, right now doing this with 'part' alone is kind of silly, since it
does not change within the relation catalog (because we said that dump and
load were None, as discussed above).

>>> catalog.tokenizeQuery({'part': Ynod_SAs_id}) == {
... 'part': Ynod_SAs_id}
True
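A query value wrapped in ``any(...)`` means "match relations whose indexed value is any of these tokens"; conceptually the search unions the result sets for each alternative. A small illustration of that semantics (a standalone sketch, not zc.relation internals):

```python
# Conceptual model of zc.relation.catalog.any(...): a query value
# standing for several alternative tokens, expanded as a union of
# per-token results. Illustrative only.
class AnyOf:
    def __init__(self, *tokens):
        self.tokens = set(tokens)

def find_relation_tokens(index, query_value):
    # index maps value-token -> set of relation tokens.
    if isinstance(query_value, AnyOf):
        hits = set()
        for token in query_value.tokens:
            hits |= index.get(token, set())
        return hits
    return index.get(query_value, set())

part_index = {1: {10, 11}, 2: {11, 12}}
assert find_relation_tokens(part_index, AnyOf(1, 2)) == {10, 11, 12}
assert find_relation_tokens(part_index, 1) == {10, 11}
```

The real catalog performs this union with BTree set operations, which is why alternatives must all live in the same BTree family.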
>>> catalog.tokenizeQuery(
... {'part': zc.relation.catalog.any(Zookd_SAs_id, Zookd_Devs_id)}
... ) == {'part': zc.relation.catalog.any(Zookd_SAs_id, Zookd_Devs_id)}
True

The ``tokenizeQuery`` method is so common that we're going to assign it to
a variable in our example. Then we'll do a search or two.

So...find the relations that Ynod Devs supervise.

>>> t = catalog.tokenizeQuery
>>> res = list(catalog.findRelationTokens(t({None: orgs['Ynod Devs']})))

OK...we used ``findRelationTokens``, as opposed to ``findRelations``, so
res is a couple of numbers now. How do we convert them back?
``resolveRelationTokens`` will do the trick.

>>> len(res)
3
>>> sorted(catalog.resolveRelationTokens(res))
... # doctest: +NORMALIZE_WHITESPACE
[<Organization instance "Bet Proj">, <Organization instance "Y3L4 Proj">,
<Organization instance "Ynod Devs">]

``resolveQuery`` is the mirror image of ``tokenizeQuery``: it converts
tokenized queries to queries with "loaded" values.

>>> original = {'part': zc.relation.catalog.any(
... Zookd_SAs_id, Zookd_Devs_id),
... None: orgs['Zookd Devs']}
>>> tokenized = catalog.tokenizeQuery(original)
>>> original == catalog.resolveQuery(tokenized)
True

>>> original = {None: zc.relation.catalog.any(
... orgs['Zookd SAs'], orgs['Zookd Devs']),
... 'part': Zookd_Devs_id}
>>> tokenized = catalog.tokenizeQuery(original)
>>> original == catalog.resolveQuery(tokenized)
True

Likewise, ``tokenizeRelations`` is the mirror image of
``resolveRelationTokens``.

>>> sorted(catalog.tokenizeRelations(
... [orgs["Bet Proj"], orgs["Y3L4 Proj"]])) == sorted(
... registry.getId(o) for o in
... [orgs["Bet Proj"], orgs["Y3L4 Proj"]])
True

The other token-related methods are as follows [17]:

- ``tokenizeValues``, which returns an iterable of tokens for the values
  of the given index name;
- ``resolveValueTokens``, which returns an iterable of values for the
  tokens of the given index name;
- ``tokenizeRelation``, which returns a token for the given relation; and
- ``resolveRelationToken``, which returns a relation for the given token.

[17] For what it's worth, here are some small
examples of the remaining token-related methods.

These two are the singular versions of ``tokenizeRelations`` and
``resolveRelationTokens``.

``tokenizeRelation`` returns a token for the given relation.

>>> catalog.tokenizeRelation(orgs['Zookd Corp Management']) == (
...     registry.getId(orgs['Zookd Corp Management']))
True

``resolveRelationToken`` returns a relation for the given token.

>>> catalog.resolveRelationToken(registry.getId(
...     orgs['Zookd Corp Management'])) is orgs['Zookd Corp Management']
True

The "values" ones are a bit lame to show now, since the only value
we have right now is not tokenized but used straight up. But here
goes, showing some fascinating no-ops.

``tokenizeValues`` returns an iterable of tokens for the values of
the given index name.

>>> list(catalog.tokenizeValues((1,2,3), 'part'))
[1, 2, 3]

``resolveValueTokens`` returns an iterable of values for the tokens of
the given index name.

>>> list(catalog.resolveValueTokens((1,2,3), 'part'))
[1, 2, 3]

Why do we bother with these tokens, instead of hiding them away and
making the API prettier? By exposing them, we enable efficient joining,
and efficient use in other contexts. For instance, if you use the same
intid utility to tokenize in other catalogs, our results can be merged
with the results of other catalogs. Similarly, you can use the results
of queries to other catalogs--or even "joins" from earlier results of
querying this catalog--as query values here. We'll explore this in the
next section.

Roles

We have set up the Organization relations. Now let's set up the roles, and
actually be able to answer the questions that we described at the beginning
of the document.

In our Roles object, roles and principals will simply be strings--ids, if
this were a real system. The organization will be a direct object reference.

>>> @zope.interface.implementer(IRoles)
... @total_ordering
... class Roles(persistent.Persistent):
...
... def __init__(self, principal_id, role_ids, organization):
... self.principal_id = principal_id
... self.role_ids = BTrees.family32.OI.TreeSet(role_ids)
... self._organization = organization
... def getOrganization(self):
... return self._organization
... # the rest is for prettier/easier tests
... def __repr__(self):
... return "<Roles instance (%s has %s in %s)>" % (
... self.principal_id, ', '.join(self.role_ids),
... self._organization.title)
... def __lt__(self, other):
... _self = (
... self.principal_id,
... tuple(self.role_ids),
... self._organization.title,
... )
... _other = (
... other.principal_id,
... tuple(other.role_ids),
... other._organization.title,
... )
... return _self < _other
... def __eq__(self, other):
... return self is other
... def __hash__(self):
... return 1 # dummy
...Now let’s add add the value indexes to the relation catalog.>>> catalog.addValueIndex(IRoles['principal_id'], btree=BTrees.family32.OI)
>>> catalog.addValueIndex(IRoles['role_ids'], btree=BTrees.family32.OI,
... multiple=True, name='role_id')
>>> catalog.addValueIndex(IRoles['getOrganization'], dump, load,
... name='organization')

Those are some slightly new variations of what we've seen in
``addValueIndex`` before, but all mixing and matching on the same
ingredients.

As a reminder, here is our organization structure:

Ynod Corp Management                       Zookd Corp Management
    /      |      \                           /      |      \
Ynod Devs  Ynod SAs  Ynod Admins  Zookd Admins  Zookd SAs  Zookd Devs
   /  \                  \           /                        /  \
Y3L4 Proj  Bet Proj    Ynod Zookd Task Force      Zookd hOgnmd  Zookd Nbd

Now let's create and add some roles.

>>> principal_ids = [
... 'abe', 'bran', 'cathy', 'david', 'edgar', 'frank', 'gertrude',
... 'harriet', 'ignas', 'jacob', 'karyn', 'lettie', 'molly', 'nancy',
... 'ophelia', 'pat']
>>> role_ids = ['user manager', 'writer', 'reviewer', 'publisher']
>>> get_role = dict((v[0], v) for v in role_ids).__getitem__
>>> roles = root['roles'] = BTrees.family32.IO.BTree()
>>> next = 0
>>> for prin, org, role_ids in (
... ('abe', orgs['Zookd Corp Management'], 'uwrp'),
... ('bran', orgs['Ynod Corp Management'], 'uwrp'),
... ('cathy', orgs['Ynod Devs'], 'w'),
... ('cathy', orgs['Y3L4 Proj'], 'r'),
... ('david', orgs['Bet Proj'], 'wrp'),
... ('edgar', orgs['Ynod Devs'], 'up'),
... ('frank', orgs['Ynod SAs'], 'uwrp'),
... ('frank', orgs['Ynod Admins'], 'w'),
... ('gertrude', orgs['Ynod Zookd Task Force'], 'uwrp'),
... ('harriet', orgs['Ynod Zookd Task Force'], 'w'),
... ('harriet', orgs['Ynod Admins'], 'r'),
... ('ignas', orgs['Zookd Admins'], 'r'),
... ('ignas', orgs['Zookd Corp Management'], 'w'),
... ('karyn', orgs['Zookd Corp Management'], 'uwrp'),
... ('karyn', orgs['Ynod Corp Management'], 'uwrp'),
... ('lettie', orgs['Zookd Corp Management'], 'u'),
... ('lettie', orgs['Ynod Zookd Task Force'], 'w'),
... ('lettie', orgs['Zookd SAs'], 'w'),
... ('molly', orgs['Zookd SAs'], 'uwrp'),
... ('nancy', orgs['Zookd Devs'], 'wrp'),
... ('nancy', orgs['Zookd hOgnmd'], 'u'),
... ('ophelia', orgs['Zookd Corp Management'], 'w'),
... ('ophelia', orgs['Zookd Devs'], 'r'),
... ('ophelia', orgs['Zookd Nbd'], 'p'),
... ('pat', orgs['Zookd Nbd'], 'wrp')):
... assert prin in principal_ids
... role_ids = [get_role(l) for l in role_ids]
... role = roles[next] = Roles(prin, role_ids, org)
... role.key = next
... next += 1
... catalog.index(role)
...

Now we can begin to do searches [18].

[18] We can also show the values token methods more
sanely now.

>>> original = sorted((orgs['Zookd Devs'], orgs['Ynod SAs']))
>>> tokens = list(catalog.tokenizeValues(original, 'organization'))
>>> original == sorted(catalog.resolveValueTokens(tokens, 'organization'))
True

What are all the role settings for ophelia?

>>> sorted(catalog.findRelations({'principal_id': 'ophelia'}))
... # doctest: +NORMALIZE_WHITESPACE
[<Roles instance (ophelia has publisher in Zookd Nbd)>,
<Roles instance (ophelia has reviewer in Zookd Devs)>,
<Roles instance (ophelia has writer in Zookd Corp Management)>]

That answer does not need to be transitive: we're done.

Next question. Where does ophelia have the 'writer' role?

>>> list(catalog.findValues(
... 'organization', {'principal_id': 'ophelia',
... 'role_id': 'writer'}))
[<Organization instance "Zookd Corp Management">]

Well, that's correct intransitively. Do we need a transitive query
factory? No! This is a great chance to look at the token join we talked
about in the previous section. This should actually be a two-step
operation: find all of the organizations in which ophelia has writer,
and then find all of the transitive parts of those organizations.

>>> sorted(catalog.findRelations({None: zc.relation.catalog.Any(
... catalog.findValueTokens('organization',
... {'principal_id': 'ophelia',
... 'role_id': 'writer'}))}))
... # doctest: +NORMALIZE_WHITESPACE
[<Organization instance "Ynod Zookd Task Force">,
<Organization instance "Zookd Admins">,
<Organization instance "Zookd Corp Management">,
<Organization instance "Zookd Devs">,
<Organization instance "Zookd Nbd">,
<Organization instance "Zookd SAs">,
<Organization instance "Zookd hOgnmd">]That’s more like it.Next question. What users have roles in the ‘Zookd Devs’ organization?
Intransitively, that’s pretty easy.>>> sorted(catalog.findValueTokens(
... 'principal_id', t({'organization': orgs['Zookd Devs']})))
['nancy', 'ophelia']

Transitively, we should do another join.

>>> org_id = registry.getId(orgs['Zookd Devs'])
>>> sorted(catalog.findValueTokens(
... 'principal_id', {
... 'organization': zc.relation.catalog.any(
... org_id, *catalog.findRelationTokens({'part': org_id}))}))
['abe', 'ignas', 'karyn', 'lettie', 'nancy', 'ophelia']

That's a little awkward, but it does the trick.

Last question, and the kind of question that started the entire example.
What roles does ophelia have in the "Zookd Nbd" organization?

>>> list(catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Zookd Nbd'],
... 'principal_id': 'ophelia'})))
['publisher']

Intransitively, that's correct. But, transitively, ophelia also has
reviewer and writer, and that's the answer we want to be able to get
quickly.

We could ask the question a different way, then, again leveraging a join.
We'll set it up as a function, because we will want to use it a little
later without repeating the code.

>>> def getRolesInOrganization(principal_id, org):
... org_id = registry.getId(org)
... return sorted(catalog.findValueTokens(
... 'role_id', {
... 'organization': zc.relation.catalog.any(
... org_id,
... *catalog.findRelationTokens({'part': org_id})),
... 'principal_id': principal_id}))
...
>>> getRolesInOrganization('ophelia', orgs['Zookd Nbd'])
['publisher', 'reviewer', 'writer']

As you can see, then, working with tokens makes interesting joins possible,
as long as the tokens are the same across the two queries.

We have examined token methods and token techniques like joins. The example
story we have told can let us get into a few more advanced topics, such as
query factory joins and search indexes that can increase their read speed.

Query Factory Joins

We can build a query factory that makes the join automatic. A query
factory is a callable that takes two arguments: a query (the one that
starts the search) and the catalog. The factory either returns None,
indicating that the query factory cannot be used for this query, or it
returns another callable that takes a chain of relations. The last
token in the relation chain is the most recent. The output of this
inner callable is expected to be an iterable of
BTrees.family32.OO.Bucket queries to search further from the given chain
of relations.Here’s a flawed approach to this problem.>>> def flawed_factory(query, catalog):
... if (len(query) == 2 and
... 'organization' in query and
... 'principal_id' in query):
... def getQueries(relchain):
... if not relchain:
... yield query
... return
... current = catalog.getValueTokens(
... 'organization', relchain[-1])
... if current:
... organizations = catalog.getRelationTokens(
... {'part': zc.relation.catalog.Any(current)})
... if organizations:
... res = BTrees.family32.OO.Bucket(query)
... res['organization'] = zc.relation.catalog.Any(
... organizations)
... yield res
... return getQueries
...

That works for our current example.

>>> sorted(catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Zookd Nbd'],
... 'principal_id': 'ophelia'}),
... queryFactory=flawed_factory))
['publisher', 'reviewer', 'writer']

However, it won't work for other similar queries.

>>> getRolesInOrganization('abe', orgs['Zookd Nbd'])
['publisher', 'reviewer', 'user manager', 'writer']
>>> sorted(catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Zookd Nbd'],
... 'principal_id': 'abe'}),
... queryFactory=flawed_factory))
[]

Oops.

The flawed_factory is actually a useful pattern for more typical relation
traversal. It goes from relation to relation to relation, and ophelia has
connected relations all the way to the top. However, abe only has them at
the top, so nothing is traversed.

Instead, we can make a query factory that modifies the initial query.

>>> def factory2(query, catalog):
... if (len(query) == 2 and
... 'organization' in query and
... 'principal_id' in query):
... def getQueries(relchain):
... if not relchain:
... res = BTrees.family32.OO.Bucket(query)
... org_id = query['organization']
... if org_id is not None:
... res['organization'] = zc.relation.catalog.any(
... org_id,
... *catalog.findRelationTokens({'part': org_id}))
... yield res
... return getQueries
...

>>> sorted(catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Zookd Nbd'],
... 'principal_id': 'ophelia'}),
... queryFactory=factory2))
['publisher', 'reviewer', 'writer']

>>> sorted(catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Zookd Nbd'],
... 'principal_id': 'abe'}),
... queryFactory=factory2))
['publisher', 'reviewer', 'user manager', 'writer']

A difference between this and the other approach is that it is essentially
intransitive: this query factory modifies the initial query, and then does
not give further queries. The catalog currently always stops calling the
query factory if the queries do not return any results, so an approach like
the flawed_factory simply won’t work for this kind of problem.We could add this query factory as another default.>>> catalog.addDefaultQueryFactory(factory2)>>> sorted(catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Zookd Nbd'],
... 'principal_id': 'ophelia'})))
['publisher', 'reviewer', 'writer']

>>> sorted(catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Zookd Nbd'],
... 'principal_id': 'abe'})))
['publisher', 'reviewer', 'user manager', 'writer']

The previously installed query factory is still available.

>>> list(catalog.iterDefaultQueryFactories()) == [factory1, factory2]
True

>>> list(catalog.findRelations(
... {'part': registry.getId(orgs['Y3L4 Proj'])}))
... # doctest: +NORMALIZE_WHITESPACE
[<Organization instance "Ynod Devs">,
<Organization instance "Ynod Corp Management">]

>>> sorted(catalog.findRelations(
... {None: registry.getId(orgs['Ynod Corp Management'])}))
... # doctest: +NORMALIZE_WHITESPACE
[<Organization instance "Bet Proj">, <Organization instance "Y3L4 Proj">,
<Organization instance "Ynod Admins">,
<Organization instance "Ynod Corp Management">,
<Organization instance "Ynod Devs">, <Organization instance "Ynod SAs">,
<Organization instance "Ynod Zookd Task Force">]

Search Index for Query Factory Joins

Now that we have written a query factory that encapsulates the join, we can
use a search index that speeds it up. We've only used transitive search
indexes so far. Now we will add an intransitive search index.

The intransitive search index generally just needs the search value
names it should be indexing, optionally the result name (defaulting to
relations), and optionally the query factory to be used.

We need to use two additional options because of the odd join trick we're
doing. We need to specify what organization and principal_id values need
to be changed when an object is indexed, and we need to indicate that we
should update when organization, principal_id,orparts changes.getValueTokensspecifies the values that need to be indexed. It gets
the index, the name for the tokens desired, the token, the catalog that
generated the token change (it may not be the same as the index’s
catalog, the source dictionary that contains a dictionary of the values
that will be used for tokens if you do not override them, a dict of the
added values for this token (keys are value names), a dict of the
removed values for this token, and whether the token has been removed.
The method can return None, which will leave the index to its default
behavior that should work if no query factory is used; or an iterable of
values.

>>> def getValueTokens(index, name, token, catalog, source,
... additions, removals, removed):
... if name == 'organization':
... orgs = source.get('organization')
... if not removed or not orgs:
... orgs = index.catalog.getValueTokens(
... 'organization', token)
... if not orgs:
... orgs = [token]
... orgs.extend(removals.get('part', ()))
... orgs = set(orgs)
... orgs.update(index.catalog.findValueTokens(
... 'part',
... {None: zc.relation.catalog.Any(
... t for t in orgs if t is not None)}))
... return orgs
... elif name == 'principal_id':
... # we only want custom behavior if this is an organization
... if 'principal_id' in source or index.catalog.getValueTokens(
... 'principal_id', token):
... return ''
... orgs = set((token,))
... orgs.update(index.catalog.findRelationTokens(
... {'part': token}))
... return set(index.catalog.findValueTokens(
... 'principal_id', {
... 'organization': zc.relation.catalog.Any(orgs)}))
...

>>> index = zc.relation.searchindex.Intransitive(
... ('organization', 'principal_id'), 'role_id', factory2,
... getValueTokens,
... ('organization', 'principal_id', 'part', 'role_id'),
... unlimitedDepth=True)
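Before installing it, it helps to picture what such an index buys. The
following is a hypothetical plain-Python sketch, not zc.relation's
implementation: an intransitive search index behaves like a memo table from
search-value tokens to precomputed results, with entries invalidated when
listener notifications report changes.

```python
# Hypothetical sketch of the idea behind an intransitive search index:
# a cache keyed by the search values, refreshed on invalidation.

class TinyIntransitiveIndex:
    def __init__(self, search):
        self._search = search   # the expensive query function
        self._cache = {}        # (org, principal) -> frozenset of roles

    def find(self, org, principal):
        key = (org, principal)
        if key not in self._cache:
            self._cache[key] = frozenset(self._search(org, principal))
        return self._cache[key]

    def invalidate(self, org, principal):
        # what the listener machinery would trigger on relation changes
        self._cache.pop((org, principal), None)

calls = []
def slow_search(org, principal):
    calls.append((org, principal))
    return {'writer', 'reviewer'}

ix = TinyIntransitiveIndex(slow_search)
ix.find('Zookd Nbd', 'abe')
ix.find('Zookd Nbd', 'abe')   # second call is served from the cache
print(len(calls))  # 1
```

The real index is, of course, maintained incrementally and stored in BTree
structures, but the lookup/invalidate shape is the same.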
>>> catalog.addSearchIndex(index)

>>> res = catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Zookd Nbd'],
... 'principal_id': 'ophelia'}))
>>> list(res)
['publisher', 'reviewer', 'writer']
>>> list(res)
['publisher', 'reviewer', 'writer']

>>> res = catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Zookd Nbd'],
... 'principal_id': 'abe'}))
>>> list(res)
['publisher', 'reviewer', 'user manager', 'writer']
>>> list(res)
['publisher', 'reviewer', 'user manager', 'writer']

[19] The Intransitive search index provides ISearchIndex and IListener.

>>> from zope.interface.verify import verifyObject
>>> import zc.relation.interfaces
>>> verifyObject(zc.relation.interfaces.ISearchIndex, index)
True
>>> verifyObject(zc.relation.interfaces.IListener, index)
True

Now we can change and remove relations–both organizations and roles–and
have the index maintain correct state. Given the current state of
organizations–

        Ynod Corp Management                    Zookd Corp Management
       /        |         \                    /        |        \
  Ynod Devs  Ynod SAs  Ynod Admins    Zookd Admins  Zookd SAs  Zookd Devs
   /     \          \      /                             /    \
Y3L4 Proj  Bet Proj  Ynod Zookd Task Force     Zookd hOgnmd  Zookd Nbd

–first we will move Ynod Devs to beneath Zookd Devs, and back out. This will
briefly give abe full privileges to Y3L4 Proj., among others.

>>> list(catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Y3L4 Proj'],
... 'principal_id': 'abe'})))
[]
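The effect the next few steps exercise–moving a subtree and watching the
indexed answers change–can be modeled in a few lines of plain Python (a
hypothetical sketch, unrelated to the catalog's own data structures):

```python
# Hypothetical model: recompute transitive parts after an edit.
parts = {
    'Zookd Devs': set(),
    'Ynod Devs': {'Y3L4 Proj', 'Bet Proj'},
}

def all_parts(org):
    """All transitive parts of `org`, via an iterative walk."""
    out = set()
    stack = [org]
    while stack:
        for child in parts.get(stack.pop(), ()):
            if child not in out:
                out.add(child)
                stack.append(child)
    return out

print('Y3L4 Proj' in all_parts('Zookd Devs'))  # False
parts['Zookd Devs'].add('Ynod Devs')           # move the subtree in
print('Y3L4 Proj' in all_parts('Zookd Devs'))  # True
parts['Zookd Devs'].remove('Ynod Devs')        # and back out
print('Y3L4 Proj' in all_parts('Zookd Devs'))  # False
```

The search index's job is to keep answers like these precomputed and correct
while the edits happen.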
>>> orgs['Zookd Devs'].parts.insert(registry.getId(orgs['Ynod Devs']))
1
>>> catalog.index(orgs['Zookd Devs'])
>>> res = catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Y3L4 Proj'],
... 'principal_id': 'abe'}))
>>> list(res)
['publisher', 'reviewer', 'user manager', 'writer']
>>> list(res)
['publisher', 'reviewer', 'user manager', 'writer']
>>> orgs['Zookd Devs'].parts.remove(registry.getId(orgs['Ynod Devs']))
>>> catalog.index(orgs['Zookd Devs'])
>>> list(catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Y3L4 Proj'],
... 'principal_id': 'abe'})))
[]

As another example, we will change the roles abe has, and see that it is
propagated down to Zookd Nbd.

>>> rels = list(catalog.findRelations(t(
... {'principal_id': 'abe',
... 'organization': orgs['Zookd Corp Management']})))
>>> len(rels)
1
>>> rels[0].role_ids.remove('reviewer')
>>> catalog.index(rels[0])

>>> res = catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Zookd Nbd'],
... 'principal_id': 'abe'}))
>>> list(res)
['publisher', 'user manager', 'writer']
>>> list(res)
['publisher', 'user manager', 'writer']

Note that search index order matters. In our case, our intransitive search
index relies on our transitive index, so the transitive index needs to
come first: transitive relation indexes should be registered before the
search indexes that rely on them. Right now, you are in charge of this
order; it would be difficult to come up with a reliable algorithm for
guessing it.

Listeners, Catalog Administration, and Joining Across Relation Catalogs

We've done all of our examples so far with a single catalog that indexes
both kinds of relations. What if we want to have two catalogs with
homogenous collections of relations? That can feel cleaner, but it also
introduces some new wrinkles.

Let's use our current catalog for organizations, removing the extra
information, and create a new one for roles.

>>> role_catalog = root['role_catalog'] = catalog.copy()
>>> transaction.commit()
>>> org_catalog = catalog
>>> del catalog

We'll need a slightly different query factory and a slightly different
search index getValueTokens function. We'll write those, then modify the
configuration of our two catalogs for the new world.

The transitive factory we write here is for the role catalog. It needs
access to the organization catalog. We could do this a variety of
ways–relying on a utility, or finding the catalog from context. We will
make the role_catalog have a .org_catalog attribute, and rely on that.

>>> role_catalog.org_catalog = org_catalog
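The join that factory3 implements can be pictured in miniature, independent
of zc.relation. In this hypothetical sketch, org_children and role_records
are invented stand-ins for the two catalogs:

```python
# Hypothetical sketch of a cross-catalog join: the role store expands an
# organization through the org store before matching role records.

org_children = {
    'Zookd Devs': {'Zookd hOgnmd', 'Zookd Nbd'},
}

role_records = [
    ('abe', 'Zookd Devs', 'writer'),
    ('ophelia', 'Zookd Nbd', 'publisher'),
]

def expand(org):
    """The orgs contained in `org`, including itself (one level here)."""
    return {org} | org_children.get(org, set())

def roles_in(principal, org):
    # find every org whose expansion contains `org`, then match records
    matching = {o for o in set(org_children) | {org} if org in expand(o)}
    return sorted(r for p, o, r in role_records
                  if p == principal and o in matching)

print(roles_in('abe', 'Zookd Nbd'))  # ['writer'] -- inherited from Zookd Devs
```

The point of the sketch: answering a role query means first expanding the
organization through the other store, which is exactly the extra query that
factory3 issues against the organization catalog.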
>>> def factory3(query, catalog):
... if (len(query) == 2 and
... 'organization' in query and
... 'principal_id' in query):
... def getQueries(relchain):
... if not relchain:
... res = BTrees.family32.OO.Bucket(query)
... org_id = query['organization']
... if org_id is not None:
... res['organization'] = zc.relation.catalog.any(
... org_id,
... *catalog.org_catalog.findRelationTokens(
... {'part': org_id}))
... yield res
... return getQueries
...

>>> def getValueTokens2(index, name, token, catalog, source,
... additions, removals, removed):
... is_role_catalog = catalog is index.catalog # role_catalog
... if name == 'organization':
... if is_role_catalog:
... orgs = set(source.get('organization') or
... index.catalog.getValueTokens(
... 'organization', token) or ())
... else:
... orgs = set((token,))
... orgs.update(removals.get('part', ()))
... orgs.update(index.catalog.org_catalog.findValueTokens(
... 'part',
... {None: zc.relation.catalog.Any(
... t for t in orgs if t is not None)}))
... return orgs
... elif name == 'principal_id':
... # we only want custom behavior if this is an organization
... if not is_role_catalog:
... orgs = set((token,))
... orgs.update(index.catalog.org_catalog.findRelationTokens(
... {'part': token}))
... return set(index.catalog.findValueTokens(
... 'principal_id', {
... 'organization': zc.relation.catalog.Any(orgs)}))
... return ''

If you are following along in the code and comparing to the originals, you may
see that this approach is a bit cleaner than the one when the relations were
in the same catalog.

Now we will fix up the organization catalog[20].

[20] Before we modify them, let's look at the copy we made.
The copy should currently behave identically to the original.

>>> len(org_catalog)
38
>>> len(role_catalog)
38
>>> indexed = list(org_catalog)
>>> len(indexed)
38
>>> orgs['Zookd Devs'] in indexed
True
>>> for r in indexed:
... if r not in role_catalog:
... print('bad')
... break
... else:
... print('good')
...
good
>>> org_names = set(dir(org_catalog))
>>> role_names = set(dir(role_catalog))
>>> sorted(org_names - role_names)
[]
>>> sorted(role_names - org_names)
['org_catalog']

>>> def checkYnodDevsParts(catalog):
... res = sorted(catalog.findRelations(t({None: orgs['Ynod Devs']})))
... if res != [
... orgs["Bet Proj"], orgs["Y3L4 Proj"], orgs["Ynod Devs"]]:
... print("bad", res)
...
>>> checkYnodDevsParts(org_catalog)
>>> checkYnodDevsParts(role_catalog)

>>> def checkOpheliaRoles(catalog):
... res = sorted(catalog.findRelations({'principal_id': 'ophelia'}))
... if repr(res) != (
... "[<Roles instance (ophelia has publisher in Zookd Nbd)>, " +
... "<Roles instance (ophelia has reviewer in Zookd Devs)>, " +
... "<Roles instance (ophelia has writer in " +
... "Zookd Corp Management)>]"):
... print("bad", res)
...
>>> checkOpheliaRoles(org_catalog)
>>> checkOpheliaRoles(role_catalog)

>>> def checkOpheliaWriterOrganizations(catalog):
... res = sorted(catalog.findRelations({None: zc.relation.catalog.Any(
... catalog.findValueTokens(
... 'organization', {'principal_id': 'ophelia',
... 'role_id': 'writer'}))}))
... if repr(res) != (
... '[<Organization instance "Ynod Zookd Task Force">, ' +
... '<Organization instance "Zookd Admins">, ' +
... '<Organization instance "Zookd Corp Management">, ' +
... '<Organization instance "Zookd Devs">, ' +
... '<Organization instance "Zookd Nbd">, ' +
... '<Organization instance "Zookd SAs">, ' +
... '<Organization instance "Zookd hOgnmd">]'):
... print("bad", res)
...
>>> checkOpheliaWriterOrganizations(org_catalog)
>>> checkOpheliaWriterOrganizations(role_catalog)

>>> def checkPrincipalsWithRolesInZookdDevs(catalog):
... org_id = registry.getId(orgs['Zookd Devs'])
... res = sorted(catalog.findValueTokens(
... 'principal_id',
... {'organization': zc.relation.catalog.any(
... org_id, *catalog.findRelationTokens({'part': org_id}))}))
... if res != ['abe', 'ignas', 'karyn', 'lettie', 'nancy', 'ophelia']:
... print("bad", res)
...
>>> checkPrincipalsWithRolesInZookdDevs(org_catalog)
>>> checkPrincipalsWithRolesInZookdDevs(role_catalog)

>>> def checkOpheliaRolesInZookdNbd(catalog):
... res = sorted(catalog.findValueTokens(
... 'role_id', {
... 'organization': registry.getId(orgs['Zookd Nbd']),
... 'principal_id': 'ophelia'}))
... if res != ['publisher', 'reviewer', 'writer']:
... print("bad", res)
...
>>> checkOpheliaRolesInZookdNbd(org_catalog)
>>> checkOpheliaRolesInZookdNbd(role_catalog)

>>> def checkAbeRolesInZookdNbd(catalog):
... res = sorted(catalog.findValueTokens(
... 'role_id', {
... 'organization': registry.getId(orgs['Zookd Nbd']),
... 'principal_id': 'abe'}))
... if res != ['publisher', 'user manager', 'writer']:
... print("bad", res)
...
>>> checkAbeRolesInZookdNbd(org_catalog)
>>> checkAbeRolesInZookdNbd(role_catalog)
>>> org_catalog.removeDefaultQueryFactory(None) # doctest: +ELLIPSIS
Traceback (most recent call last):
...
LookupError: ('factory not found', None)

>>> org_catalog.removeValueIndex('organization')
>>> org_catalog.removeValueIndex('role_id')
>>> org_catalog.removeValueIndex('principal_id')
>>> org_catalog.removeDefaultQueryFactory(factory2)
>>> org_catalog.removeSearchIndex(index)
>>> org_catalog.clear()
>>> len(org_catalog)
0
>>> for v in orgs.values():
... org_catalog.index(v)

This also shows using the removeDefaultQueryFactory and
removeSearchIndex methods[21].

[21] You get errors by removing query factories that are not registered.

>>> org_catalog.removeDefaultQueryFactory(factory2) # doctest: +ELLIPSIS
Traceback (most recent call last):
...
LookupError: ('factory not found', <function factory2 at ...>)

Now we will set up the role catalog[22].

[22] Changes to one copy should not affect the other. That
means the role_catalog should still work as before.

>>> len(org_catalog)
13
>>> len(list(org_catalog))
13

>>> len(role_catalog)
38
>>> indexed = list(role_catalog)
>>> len(indexed)
38
>>> orgs['Zookd Devs'] in indexed
True
>>> orgs['Zookd Devs'] in role_catalog
True

>>> checkYnodDevsParts(role_catalog)
>>> checkOpheliaRoles(role_catalog)
>>> checkOpheliaWriterOrganizations(role_catalog)
>>> checkPrincipalsWithRolesInZookdDevs(role_catalog)
>>> checkOpheliaRolesInZookdNbd(role_catalog)
>>> checkAbeRolesInZookdNbd(role_catalog)

>>> role_catalog.removeValueIndex('part')
>>> for ix in list(role_catalog.iterSearchIndexes()):
... role_catalog.removeSearchIndex(ix)
...
>>> role_catalog.removeDefaultQueryFactory(factory1)
>>> role_catalog.removeDefaultQueryFactory(factory2)
>>> role_catalog.addDefaultQueryFactory(factory3)
>>> root['index2'] = index2 = zc.relation.searchindex.Intransitive(
... ('organization', 'principal_id'), 'role_id', factory3,
... getValueTokens2,
... ('organization', 'principal_id', 'part', 'role_id'),
... unlimitedDepth=True)
>>> role_catalog.addSearchIndex(index2)

The new role_catalog index needs to be updated from the org_catalog.
We'll set that up using listeners, a new concept.

>>> org_catalog.addListener(index2)
>>> list(org_catalog.iterListeners()) == [index2]
True

Now the role_catalog should be able to answer the same questions as the old
single catalog approach.

>>> t = role_catalog.tokenizeQuery
>>> list(role_catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Zookd Nbd'],
... 'principal_id': 'abe'})))
['publisher', 'user manager', 'writer']

>>> list(role_catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Zookd Nbd'],
... 'principal_id': 'ophelia'})))
['publisher', 'reviewer', 'writer']

We can also make changes to both catalogs and the search indexes are
maintained.

>>> list(role_catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Y3L4 Proj'],
... 'principal_id': 'abe'})))
[]
>>> orgs['Zookd Devs'].parts.insert(registry.getId(orgs['Ynod Devs']))
1
>>> org_catalog.index(orgs['Zookd Devs'])
>>> list(role_catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Y3L4 Proj'],
... 'principal_id': 'abe'})))
['publisher', 'user manager', 'writer']
>>> orgs['Zookd Devs'].parts.remove(registry.getId(orgs['Ynod Devs']))
>>> org_catalog.index(orgs['Zookd Devs'])
>>> list(role_catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Y3L4 Proj'],
... 'principal_id': 'abe'})))
[]

>>> rels = list(role_catalog.findRelations(t(
... {'principal_id': 'abe',
... 'organization': orgs['Zookd Corp Management']})))
>>> len(rels)
1
>>> rels[0].role_ids.insert('reviewer')
1
>>> role_catalog.index(rels[0])

>>> res = role_catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Zookd Nbd'],
... 'principal_id': 'abe'}))
>>> list(res)
['publisher', 'reviewer', 'user manager', 'writer']

Here we add a new organization.

>>> orgs['Zookd hOnc'] = org = Organization('Zookd hOnc')
>>> orgs['Zookd Devs'].parts.insert(registry.getId(org))
1
>>> org_catalog.index(orgs['Zookd hOnc'])
>>> org_catalog.index(orgs['Zookd Devs'])

>>> list(role_catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Zookd hOnc'],
... 'principal_id': 'abe'})))
['publisher', 'reviewer', 'user manager', 'writer']

>>> list(role_catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Zookd hOnc'],
... 'principal_id': 'ophelia'})))
['reviewer', 'writer']

Now we'll remove it.

>>> orgs['Zookd Devs'].parts.remove(registry.getId(org))
>>> org_catalog.index(orgs['Zookd Devs'])
>>> org_catalog.unindex(orgs['Zookd hOnc'])

TODO make sure that intransitive copy looks the way we expect[23]

[23] You can add listeners multiple times.

>>> org_catalog.addListener(index2)
>>> list(org_catalog.iterListeners()) == [index2, index2]
True

Now we will remove the listeners, to show we can.

>>> org_catalog.removeListener(index2)
>>> org_catalog.removeListener(index2)
>>> org_catalog.removeListener(index2)
... # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
Traceback (most recent call last):
...
LookupError: ('listener not found',
<zc.relation.searchindex.Intransitive object at ...>)
>>> org_catalog.removeListener(None)
... # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
Traceback (most recent call last):
...
LookupError: ('listener not found', None)

Here's the same for removing a search index we don't have:

>>> org_catalog.removeSearchIndex(index2)
... # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
Traceback (most recent call last):
...
LookupError: ('index not found',
<zc.relation.searchindex.Intransitive object at ...>)

[24] In "2001: A Space Odyssey", many people believe the name HAL
was chosen because it was ROT25 of IBM…. I cheat a bit sometimes and
use ROT1 because the result sounds better.

Working with Search Indexes: zc.relation Catalog Extended Example

Introduction

This document assumes you have read the README.rst document, and want to learn
a bit more by example. In it, we will explore a set of relations that
demonstrates most of the aspects of working with search indexes and listeners.
It will not explain the topics that the other documents already addressed. It
also describes an advanced use case.

As we have seen in the other documents, the relation catalog supports
search indexes. These can return the results of any search, as desired.
Of course, the intent is that you supply an index that optimizes the
particular searches it claims.

The searchindex module supplies a few search indexes, optimizing
specified transitive and intransitive searches. We have seen them working
in other documents. We will examine them more in depth in this document.

Search indexes update themselves by receiving messages via a "listener"
interface. We will also look at how this works.

The example described in this file examines a use case similar to that in
the zc.revision or zc.vault packages: a relation describes a graph of
other objects. Therefore, this is our first concrete example of purely
extrinsic relations.

Let's build the example story a bit. Imagine we have a graph, often a
hierarchy, of tokens–integers. Relations specify that a given integer
token relates to other integer tokens, with a containment denotation or
other meaning.

The integers may also have relations that specify that they represent an
object or objects.

This allows us to have a graph of objects in which changing one part of the
graph does not require changing the rest. zc.revision and zc.vault thus
are able to model graphs that can have multiple revisions efficiently and
with quite a bit of metadata to support merges.

Let's imagine a simple hierarchy. The relation has a token attribute
and a children attribute; children point to tokens. Relations will
identify themselves with ids.

>>> import BTrees
>>> relations = BTrees.family64.IO.BTree()
>>> relations[99] = None # just to give us a start

>>> class Relation(object):
... def __init__(self, token, children=()):
... self.token = token
... self.children = BTrees.family64.IF.TreeSet(children)
... self.id = relations.maxKey() + 1
... relations[self.id] = self
...

>>> def token(rel, self):
... return rel.token
...
>>> def children(rel, self):
... return rel.children
...
>>> def dumpRelation(obj, index, cache):
... return obj.id
...
>>> def loadRelation(token, index, cache):
... return relations[token]
...

The standard TransposingTransitiveQueriesFactory will be able to handle this
quite well, so we'll use that for our index.

>>> import zc.relation.queryfactory
>>> factory = zc.relation.queryfactory.TransposingTransitive(
... 'token', 'children')
>>> import zc.relation.catalog
>>> catalog = zc.relation.catalog.Catalog(
... dumpRelation, loadRelation, BTrees.family64.IO, BTrees.family64)
>>> catalog.addValueIndex(token)
>>> catalog.addValueIndex(children, multiple=True)
>>> catalog.addDefaultQueryFactory(factory)

Now let's quickly create a hierarchy and index it.

>>> for token, children in (
... (0, (1, 2)), (1, (3, 4)), (2, (10, 11, 12)), (3, (5, 6)),
... (4, (13, 14)), (5, (7, 8, 9)), (6, (15, 16)), (7, (17, 18, 19)),
... (8, (20, 21, 22)), (9, (23, 24)), (10, (25, 26)),
... (11, (27, 28, 29, 30, 31, 32))):
... catalog.index(Relation(token, children))
...

[25] That hierarchy is arbitrary. Here's what we have, in terms of tokens
pointing to children:

                        _____________0_____________
                       /                           \
              ________1_______              ______2____________
             /                \            /        |          \
       ______3_____           _4_        10    ____11_____      12
      /            \         /   \      /  \   / / | \ \ \
 _______5_______    6      13     14   25  26 27 28 29 30 31 32
 /    |        \   / \
_7_  _8_        9 15 16
/ | \  / | \   / \
17 18 19 20 21 22 23 24

Twelve relations, with tokens 0 through 11, and a total of 33 tokens,
including children. The ids for the 12 relations are 100 through 111,
corresponding with the tokens of 0 through 11.

Without a transitive search index, we can get all transitive results.
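The hierarchy just indexed can be cross-checked with a plain-Python walk (a
hypothetical sketch, independent of the catalog): the transitive children of
token 0 should be every other token.

```python
from collections import deque

# The parent -> children edges of the example hierarchy above.
EDGES = {
    0: (1, 2), 1: (3, 4), 2: (10, 11, 12), 3: (5, 6), 4: (13, 14),
    5: (7, 8, 9), 6: (15, 16), 7: (17, 18, 19), 8: (20, 21, 22),
    9: (23, 24), 10: (25, 26), 11: (27, 28, 29, 30, 31, 32),
}

def descendants(token):
    """Breadth-first transitive closure over the children relation."""
    seen = set()
    queue = deque(EDGES.get(token, ()))
    while queue:
        child = queue.popleft()
        if child not in seen:
            seen.add(child)
            queue.extend(EDGES.get(child, ()))
    return seen

print(sorted(descendants(0)) == list(range(1, 33)))  # True
```

This is the same walk the catalog performs transitively through its query
factory, just without tokens, BTrees, or indexes.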
The results are iterators.

>>> res = catalog.findRelationTokens({'token': 0})
>>> getattr(res, '__next__', None) is None
False
>>> getattr(res, '__len__', None) is None
True
>>> sorted(res)
[100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111]
>>> list(res)
[]

>>> res = catalog.findValueTokens('children', {'token': 0})
>>> sorted(res) == list(range(1, 33))
True
>>> list(res)
[]

[26] canFind also can work transitively, and will
use transitive search indexes, as we'll see below.

>>> catalog.canFind({'token': 1}, targetQuery={'children': 23})
True
>>> catalog.canFind({'token': 2}, targetQuery={'children': 23})
False
>>> catalog.canFind({'children': 23}, targetQuery={'token': 1})
True
>>> catalog.canFind({'children': 23}, targetQuery={'token': 2})
False

findRelationTokenChains won't change, but we'll include it in the
discussion and examples to show that.

>>> res = catalog.findRelationTokenChains({'token': 2})
>>> chains = list(res)
>>> len(chains)
3
>>> len(list(res))
0

Transitive Search Indexes

Now we can add a couple of transitive search indexes. We'll talk about
them a bit first.

There is currently one variety of transitive index, which indexes
relation and value searches for the transposing transitive query
factory.

The index can only be used under certain conditions:

- The search is not a request for a relation chain.

- It does not specify a maximum depth.

- Filters are not used.

- If it is a value search, then specific value indexes cannot be used if a
target filter or target query are used, but the basic relation index can
still be used in that case.

The usage of the search indexes is largely transparent: set them up, and
the relation catalog will use them for the same API calls that used more
brute force previously. The only difference from external uses is that
results that use an index will usually be a BTree structure, rather than
an iterator.

When you add a transitive index for a relation, you must specify the
transitive name (or names) of the query, and the same for the reverse.
That's all we'll do now.

>>> import zc.relation.searchindex
>>> catalog.addSearchIndex(
... zc.relation.searchindex.TransposingTransitiveMembership(
... 'token', 'children', names=('children',)))

Now we should have a search index installed.

Notice that we went from parent (token) to child: this index is primarily
designed for helping transitive membership searches in a hierarchy. Using it to
index parents would incur a lot of write expense for not much win.

There's just a bit more you can specify here: static fields for a query
to do a bit of filtering. We don't need any of that for this example.

Now how does the catalog use this index for searches? Three basic ways,
depending on the kind of search: relations, values, or canFind.
Before we start looking into the internals, let’s verify that we’re getting
what we expect: correct answers, and not iterators, but BTree structures.

>>> res = catalog.findRelationTokens({'token': 0})
>>> list(res)
[100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111]
>>> list(res)
[100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111]

>>> res = catalog.findValueTokens('children', {'token': 0})
>>> list(res) == list(range(1, 33))
True
>>> list(res) == list(range(1, 33))
True

>>> catalog.canFind({'token': 1}, targetQuery={'children': 23})
True
>>> catalog.canFind({'token': 2}, targetQuery={'children': 23})
False

[27] Note that the last two canFind examples from
when we went through these examples without an index do not use the
index, so we don't show them here: they look the wrong direction for
this index.

So how do these results happen?

The first, findRelationTokens, and the last, canFind, are the most
straightforward. The index finds all relations that match the given
query, intransitively. Then for each relation, it looks up the indexed
transitive results for that token. The end result is the union of all
indexed results found from the intransitive search. canFind simply
casts the result into a boolean.

findValueTokens is the same story as above with only one more step. After
the union of relations is calculated, the method returns the union of the
sets of the requested value for all found relations.

It will maintain itself when relations are reindexed.
>>> for t in (27, 28, 29, 30, 31):
... rel.children.remove(t)
...
>>> catalog.index(rel)>>> catalog.findValueTokens('children', {'token': 0})
... # doctest: +NORMALIZE_WHITESPACE
LFSet([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
20, 21, 22, 23, 24, 25, 26, 32])
>>> catalog.findValueTokens('children', {'token': 2})
LFSet([10, 11, 12, 25, 26, 32])
>>> catalog.findValueTokens('children', {'token': 11})
LFSet([32])>>> rel.children.remove(32)
>>> catalog.index(rel)>>> catalog.findValueTokens('children', {'token': 0})
... # doctest: +NORMALIZE_WHITESPACE
LFSet([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
20, 21, 22, 23, 24, 25, 26])
>>> catalog.findValueTokens('children', {'token': 2})
LFSet([10, 11, 12, 25, 26])
>>> catalog.findValueTokens('children', {'token': 11})
LFSet([])>>> rel.children.insert(27)
1
>>> catalog.index(rel)>>> catalog.findValueTokens('children', {'token': 0})
... # doctest: +NORMALIZE_WHITESPACE
LFSet([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
20, 21, 22, 23, 24, 25, 26, 27])
>>> catalog.findValueTokens('children', {'token': 2})
LFSet([10, 11, 12, 25, 26, 27])
>>> catalog.findValueTokens('children', {'token': 11})
LFSet([27])

When the index is copied, the search index is copied.

>>> new = catalog.copy()
>>> res = list(new.iterSearchIndexes())
>>> len(res)
1
>>> new_index = res[0]
>>> res = list(catalog.iterSearchIndexes())
>>> len(res)
1
>>> old_index = res[0]
>>> new_index is old_index
False
>>> old_index.index is new_index.index
False
>>> list(old_index.index.keys()) == list(new_index.index.keys())
True
>>> from __future__ import print_function
>>> for key, value in old_index.index.items():
... v = new_index.index[key]
... if v is value or list(v) != list(value):
... print('oops', key, value, v)
... break
... else:
... print('good')
...
good
>>> old_index.names is not new_index.names
True
>>> list(old_index.names) == list(new_index.names)
True
>>> for name, old_ix in old_index.names.items():
... new_ix = new_index.names[name]
... if new_ix is old_ix or list(new_ix.keys()) != list(old_ix.keys()):
... print('oops')
... break
... for key, value in old_ix.items():
... v = new_ix[key]
... if v is value or list(v) != list(value):
... print('oops', name, key, value, v)
... break
... else:
... continue
... break
... else:
... print('good')
...
good

Helpers

When writing search indexes and query factories, you often want complete
access to relation catalog data. We've seen a number of these tools already.

getRelationModuleTools gets a dictionary of the BTree tools needed to
work with relations.

>>> sorted(catalog.getRelationModuleTools().keys())
... # doctest: +NORMALIZE_WHITESPACE
['BTree', 'Bucket', 'Set', 'TreeSet', 'difference', 'dump',
'intersection', 'load', 'multiunion', 'union']

'multiunion' is only there if the BTree is an I* or L* module.
Use the zc.relation.catalog.multiunion helper function to do the
best union you can for a given set of tools.

getValueModuleTools does the same for indexed values.

>>> tools = set(('BTree', 'Bucket', 'Set', 'TreeSet', 'difference',
... 'dump', 'intersection', 'load', 'multiunion', 'union'))
>>> tools.difference(catalog.getValueModuleTools('children').keys()) == set()
True

>>> tools.difference(catalog.getValueModuleTools('token').keys()) == set()
True

getRelationTokens can return all of the tokens in the catalog.

>>> len(catalog.getRelationTokens()) == len(catalog)
True

This also happens to be equivalent to findRelationTokens with an empty
query.

>>> catalog.getRelationTokens() is catalog.findRelationTokens({})
True

It also can return all the tokens that match a given query, or None if
there are no matches.

>>> catalog.getRelationTokens({'token': 0}) # doctest: +ELLIPSIS
<BTrees.LOBTree.LOTreeSet object at ...>
>>> list(catalog.getRelationTokens({'token': 0}))
[100]

This also happens to be equivalent to findRelationTokens with a query,
a maxDepth of 1, and no other arguments.

>>> catalog.findRelationTokens({'token': 0}, maxDepth=1) is (
...     catalog.getRelationTokens({'token': 0}))
True

Except that if there are no matches, findRelationTokens returns an empty
set (so it always returns an iterable).

>>> catalog.findRelationTokens({'token': 50}, maxDepth=1)
LOSet([])
>>> print(catalog.getRelationTokens({'token': 50}))
None

getValueTokens can return all of the tokens for a given value name in
the catalog.

>>> list(catalog.getValueTokens('token')) == list(range(12))
True

This is identical to catalog.findValueTokens with a name only (or with
an empty query, and a maxDepth of 1).

>>> list(catalog.findValueTokens('token')) == list(range(12))
True
>>> catalog.findValueTokens('token') is catalog.getValueTokens('token')
True

It can also return the values for a given token.

>>> list(catalog.getValueTokens('children', 100))
[1, 2]

This is identical to catalog.findValueTokens with a name and a query of
{None: token}.

>>> list(catalog.findValueTokens('children', {None: 100}))
[1, 2]
>>> catalog.getValueTokens('children', 100) is (
... catalog.findValueTokens('children', {None: 100}))
True

Except that if there are no matches, findValueTokens returns an empty
set (so it always returns an iterable), while getValueTokens will
return None if the relation has no values (or the relation is unknown).

>>> catalog.findValueTokens('children', {None: 50}, maxDepth=1)
LFSet([])
>>> print(catalog.getValueTokens('children', 50))
None

>>> rel.children.remove(27)
>>> catalog.index(rel)
>>> catalog.findValueTokens('children', {None: rel.id}, maxDepth=1)
LFSet([])
>>> print(catalog.getValueTokens('children', rel.id))
None

yieldRelationTokenChains is a search workhorse for searches that use a
query factory. TODO: describe.

[25] The query factory knows when it is not needed–not only
when neither of its names are used, but also when both of its names are
used.

>>> list(catalog.findRelationTokens({'token': 0, 'children': 1}))
[100]

[26] When values are the same as their tokens, findValues returns the
same result as findValueTokens. Here we see this without indexes.

>>> list(catalog.findValueTokens('children', {'token': 0})) == list(
...     catalog.findValues('children', {'token': 0}))
True

[27] Again, when values are the same as their tokens, findValues returns
the same result as findValueTokens. Here we see this with indexes.

>>> list(catalog.findValueTokens('children', {'token': 0})) == list(
... catalog.findValues('children', {'token': 0}))
True

Optimizing Relation Catalog Use

There are several best practices and optimization opportunities in regard to
the catalog.

- Use integer-keyed BTree sets when possible. They can use the BTrees'
multiunion for a speed boost. Integers' __cmp__ is reliable, and in C.

- Never use persistent objects as keys. They will cause a database load every
time you need to look at them, they take up memory and object caches, and
they (as of this writing) disable conflict resolution. Intids (or similar)
are your best bet for representing objects, and some other immutable such as
strings are the next-best bet, and zope.app.keyreferences (or similar) are
after that.

- Use multiple-token values in your queries when possible, especially in your
transitive query factories.

- Use the cache when you are loading and dumping tokens, and in your
transitive query factories.

- When possible, don't load or dump tokens (the values themselves may be used
as tokens). This is especially important when you have multiple tokens:
store them in a BTree structure in the same module as the zc.relation module
for the value.

For some operations, particularly with hundreds or thousands of members in a
single relation value, some of these optimizations can speed up some
common-case reindexing work by around 100 times.

The easiest (and perhaps least useful) optimization is that all dump
calls and all load calls generated by a single operation share a cache
dictionary per call type (dump/load), per indexed relation value.
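In sketch form (hypothetical code; acquire_tokenizer stands in for an
expensive lookup such as fetching a utility):

```python
# Hypothetical sketch: dump/load callbacks receive a per-operation cache
# dict, letting an expensive acquisition happen only once per operation.

def dump_token(obj, index, cache):
    # stash the expensive-to-find tokenizer the first time through
    tokenizer = cache.get('tokenizer')
    if tokenizer is None:
        tokenizer = cache['tokenizer'] = acquire_tokenizer()
    return tokenizer(obj)

lookups = []
def acquire_tokenizer():
    lookups.append(1)           # stands in for a utility lookup
    return lambda obj: id(obj)

cache = {}                      # one dict shared across a single operation
objs = [object() for _ in range(5)]
tokens = [dump_token(o, None, cache) for o in objs]
print(len(lookups))  # 1 -- the tokenizer was acquired once, not five times
```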
Therefore, for instance, we could stash an intids utility, so that we
only had to do a utility lookup once, and thereafter it was only a
single dictionary lookup. This is what the default generateToken and resolveToken functions in zc.relationship’s index.py do: look at them
for an example.

A further optimization is to not load or dump tokens at all, but use values
that may be tokens. This will be particularly useful if the tokens have
__cmp__ (or equivalent) in C, such as built-in types like ints. To specify
this behavior, you create an index with the ‘load’ and ‘dump’ values for the
indexed attribute descriptions explicitly set to None.

>>> import zope.interface
>>> class IRelation(zope.interface.Interface):
... subjects = zope.interface.Attribute(
... 'The sources of the relation; the subject of the sentence')
... relationtype = zope.interface.Attribute(
... '''unicode: the single relation type of this relation;
... usually contains the verb of the sentence.''')
... objects = zope.interface.Attribute(
... '''the targets of the relation; usually a direct or
... indirect object in the sentence''')
...>>> import BTrees
>>> relations = BTrees.family32.IO.BTree()
>>> relations[99] = None # just to give us a start

>>> @zope.interface.implementer(IRelation)
... class Relation(object):
...
... def __init__(self, subjects, relationtype, objects):
... self.subjects = subjects
... assert relationtype in relTypes
... self.relationtype = relationtype
... self.objects = objects
... self.id = relations.maxKey() + 1
... relations[self.id] = self
... def __repr__(self):
... return '<%r %s %r>' % (
... self.subjects, self.relationtype, self.objects)

>>> def token(rel, self):
... return rel.token
...
>>> def children(rel, self):
... return rel.children
...
>>> def dumpRelation(obj, index, cache):
... return obj.id
...
>>> def loadRelation(token, index, cache):
... return relations[token]
...

>>> relTypes = ['has the role of']
>>> def relTypeDump(obj, index, cache):
... assert obj in relTypes, 'unknown relationtype'
... return obj
...
>>> def relTypeLoad(token, index, cache):
... assert token in relTypes, 'unknown relationtype'
... return token
...

>>> import zc.relation.catalog
>>> catalog = zc.relation.catalog.Catalog(
... dumpRelation, loadRelation)
>>> catalog.addValueIndex(IRelation['subjects'], multiple=True)
>>> catalog.addValueIndex(
... IRelation['relationtype'], relTypeDump, relTypeLoad,
... BTrees.family32.OI, name='reltype')
>>> catalog.addValueIndex(IRelation['objects'], multiple=True)
>>> import zc.relation.queryfactory
>>> factory = zc.relation.queryfactory.TransposingTransitive(
... 'subjects', 'objects')
... catalog.addDefaultQueryFactory(factory)

>>> rel = Relation((1,), 'has the role of', (2,))
>>> catalog.index(rel)
>>> list(catalog.findValueTokens('objects', {'subjects': 1}))
[2]

If you have single relations that relate hundreds or thousands of
objects, it can be a huge win if the value is a ‘multiple’ of the same
type as the stored BTree for the given attribute. The default BTree
family for attributes is IFBTree; IOBTree is also a good choice, and may
be preferable for some applications.

>>> catalog.unindex(rel)
>>> rel = Relation(
... BTrees.family32.IF.TreeSet((1,)), 'has the role of',
... BTrees.family32.IF.TreeSet())
>>> catalog.index(rel)
>>> list(catalog.findValueTokens('objects', {'subjects': 1}))
[]
>>> list(catalog.findValueTokens('subjects', {'objects': None}))
[1]

Reindexing is where some of the big improvements can happen. The following gyrations exercise the optimization code.

>>> rel.objects.insert(2)
1
>>> catalog.index(rel)
>>> list(catalog.findValueTokens('objects', {'subjects': 1}))
[2]
>>> rel.subjects = BTrees.family32.IF.TreeSet((3,4,5))
>>> catalog.index(rel)
>>> list(catalog.findValueTokens('objects', {'subjects': 3}))
[2]

>>> rel.subjects.insert(6)
1
>>> catalog.index(rel)
>>> list(catalog.findValueTokens('objects', {'subjects': 6}))
[2]

>>> rel.subjects.update(range(100, 200))
100
>>> catalog.index(rel)
>>> list(catalog.findValueTokens('objects', {'subjects': 100}))
[2]

>>> rel.subjects = BTrees.family32.IF.TreeSet((3,4,5,6))
>>> catalog.index(rel)
>>> list(catalog.findValueTokens('objects', {'subjects': 3}))
[2]

>>> rel.subjects = BTrees.family32.IF.TreeSet(())
>>> catalog.index(rel)
>>> list(catalog.findValueTokens('objects', {'subjects': 3}))
[]

>>> rel.subjects = BTrees.family32.IF.TreeSet((3,4,5))
>>> catalog.index(rel)
>>> list(catalog.findValueTokens('objects', {'subjects': 3}))
[2]

tokenizeValues and resolveValueTokens work correctly without loaders and dumpers–that is, they do nothing.

>>> catalog.tokenizeValues((3,4,5), 'subjects')
(3, 4, 5)
>>> catalog.resolveValueTokens((3,4,5), 'subjects')
(3, 4, 5)

Changes

2.0 (2023-04-05)

- Drop support for Python 2.7, 3.5, 3.6. [ale-rt]
- Fix the dependency on the ZODB; we just need to depend on the BTrees package. Refs. #11. [ale-rt]

1.2 (2023-03-28)

- Adapt code for PEP-479 (change StopIteration handling inside generators). See: https://peps.python.org/pep-0479. Fixes #11. [ale-rt]

1.1.post2 (2018-06-18)

- Another attempt to fix the PyPI page by using correct expected metadata syntax.

1.1.post1 (2018-06-18)

- Fix the PyPI page by using correct ReST syntax.

1.1 (2018-06-15)

- Add support for Python 3.5 and 3.6.

1.0 (2008-04-23)

This is the initial release of the zc.relation package. However, it
represents a refactoring of another package, zc.relationship. This
package contains only a modified version of the relation(ship) index,
now called a catalog. The refactored version of zc.relationship index
relies on (subclasses) this catalog. zc.relationship also maintains a
backwards-compatible subclass.

This package only relies on the ZODB, zope.interface, and zope.testing
software, and can be used inside or outside of a standard ZODB database.
The software does have to be there, though (the package relies heavily
on the ZODB BTrees package).

If you would like to switch a legacy zc.relationship index to a
zc.relation catalog, try this trick in your generations script.
Assuming the old index isold, the following line should create
a new zc.relation catalog with your legacy data:

>>> new = old.copy(zc.relation.Catalog)

Why is the same basic data structure called a catalog now? Because we
exposed the ability to mutate the data structure, and what you are really
adding and removing are indexes. It didn’t make sense to put an index in
an index, but it does make sense to put an index in a catalog. Thus, a
name change was born.

The catalog in this package has several incompatibilities from the earlier
zc.relationship index, and many new features. The zc.relationship package
maintains a backwards-compatible subclass. The following discussion
compares the zc.relation catalog with the zc.relationship 1.x index.

Incompatibilities with zc.relationship 1.x index

The two big changes are that method names now refer to Relation rather
than Relationship; and the catalog is instantiated slightly differently
from the index. A few other changes are worth your attention. The
following list attempts to highlight all incompatibilities.

Big incompatibilities:

- findRelationshipTokenSet and findValueTokenSet are renamed, with some slightly different semantics, as getRelationTokens and getValueTokens. The exact same result as findRelationTokenSet(query) can be obtained with findRelationTokens(query, 1) (where 1 is maxDepth). The same result as findValueTokenSet(reltoken, name) can be obtained with findValueTokens(name, {zc.relation.RELATION: reltoken}, 1).

- findRelations replaces findRelationships. The new method will use the defaultTransitiveQueriesFactory if it is set and maxDepth is not 1. It shares the call signature of findRelationChains.

- isLinked is now canFind.

- The catalog instantiation arguments have changed from the old index. load and dump (formerly loadRel and dumpRel, respectively) are now required arguments for instantiation. The only other optional arguments are btree (was relFamily) and family. You now specify what elements to index with addValueIndex. Note also that addValueIndex defaults to no load and dump function, unlike the old instantiation options.

- Query factories are different. See IQueryFactory in the interfaces. They first get (query, catalog, cache) and then return a getQueries callable that gets relchains and yields queries; OR None if they don’t match. They must also handle an empty relchain: typically this should return the original query, but it may also be used to mutate the original query. They are no longer thought of as transitive query factories, but as general query mutators.

Medium:

- The catalog no longer inherits from zope.app.container.contained.Contained.

- The index requires ZODB 3.8 or higher.

Small:

- deactivateSets is no longer an instantiation option (it was broken because of a ZODB bug anyway, as had been described in the documentation).

Changes and new features

The catalog now offers the ability to index certain
searches. The indexes must be explicitly instantiated and registered for the searches you want to optimize. This can be used when searching for values, when
searching for relations, or when determining if two objects are
linked. It cannot be used for relation chains. Requesting an index
has the usual trade-offs of greater storage space and slower write
speed for faster search speed. Registering a search index is done
after instantiation time; you can iterate over the current settings
used, and remove them. (The code path expects to support legacy
zc.relationship index instances for all of these APIs.)

You can now specify new values after the catalog has been created, iterate over the settings used, and remove values.

The catalog has a copy method, to quickly make new copies without actually having to reindex the relations.

Query arguments can now specify multiple values for a given name by
using zc.relation.catalog.any(1, 2, 3, 4) or
zc.relation.catalog.Any((1, 2, 3, 4)).

The catalog supports specifying indexed values by passing callables rather than interface elements (which are also still supported).

findRelations and the new method findRelationTokens can find relations transitively and intransitively. findRelationTokens, when used intransitively, repeats the legacy zc.relationship index behavior of findRelationTokenSet. (findRelationTokenSet remains in the API, not deprecated, a companion to findValueTokenSet.)

In findValues and findValueTokens, the query argument is now optional. If
the query evaluates to False in a boolean context, all values, or value
tokens, are returned. Value tokens are explicitly returned using the
underlying BTree storage. This can then be used directly for other BTree
operations.

Completely new docs. Unfortunately, still really not good enough.

The package has drastically reduced direct dependencies from zc.relationship: it is now more clearly a ZODB tool, with no other Zope dependencies than zope.testing and zope.interface.

Listeners allow objects to listen to messages from the catalog (which can be used directly or, for instance, to fire off events).

You can search for relations, using a key of zc.relation.RELATION…which is really an alias for None. Sorry. But hey, use the constant! I think it is more readable.

tokenizeQuery (and resolveQuery) now accept keyword arguments as an
alternative to a normal dict query. This can make constructing the query
a bit more attractive (i.e., query = catalog.tokenizeQuery; res = catalog.findValues('object', query(subject=joe, predicate=OWNS))).

zc.relationship

The zc.relationship package currently contains two main types of
components: a relationship index, and some relationship containers.
Both are designed to be used within the ZODB, although the index is
flexible enough to be used in other contexts. They share the model that
relationships are full-fledged objects that are indexed for optimized
searches. They also share the ability to perform optimized intransitive
and transitive relationship searches, and to support arbitrary filter
searches on relationship tokens.

The index is a very generic component that can be used to optimize searches
for N-ary relationships, can be used standalone or within a catalog, can be
used with pluggable token generation schemes, and generally tries to provide
a relatively policy-free tool. It is expected to be used primarily as an
engine for more specialized and constrained tools and APIs.

The relationship containers use the index to manage two-way
relationships, using a derived mapping interface. It is a reasonable
example of the index in standalone use.

Another example, using the container model but supporting five-way
relationships (“sources”, “targets”, “relation”, “getContext”, “state”), can
be found in plone.relations. Its README is a good read.

http://dev.plone.org/plone/browser/plone.relations/trunk/plone/relations

This current document describes the relationship index. See
container.rst for documentation of the relationship container.

PLEASE NOTE: the index in zc.relationship, described below, now exists for
backwards compatibility. zc.relation.catalog now contains the most recent,
backward-incompatible version of the index code.

Index

Contents

- Index
  - Overview
  - Simplest Example
  - Starting the N-Way Examples
  - Token conversion
  - Basic searching
  - An even simpler example
  - Searching for empty sets
  - Working with transitive searches
  - Detecting cycles
  - Transitively mapping multiple elements
  - Lies, damn lies, and statistics
  - Reindexing and removing relationships
  - Optimizing relationship index use
  - __contains__ and Unindexing
- RelationshipContainer
- Advanced Usage
  - Search Filters
  - Multiple Sources and/or Targets; Duplicate Relationships
  - Relating Relationships and Relationship Containers
  - Exposing Unresolved Tokens
- Convenience classes
  - One-To-One Relationship
  - ManyToOneRelationship
  - OneToManyRelationship
- Changes
  - 2.1 (2021-03-22)
  - 2.0.post1 (2018-06-19)
  - 2.0 (2018-06-19)
    - New Requirements
    - Incompatibilities with 1.0
    - Changes in 2.0
  - Branch 1.1
    - 1.1.0
  - Branch 1.0
    - 1.0.2
    - 1.0.1
    - 1.0.0

Overview

The index takes a very precise view of the world: instantiation requires
multiple arguments specifying the configuration; and using the index
requires that you acknowledge that the relationships and their
associated indexed values are usually tokenized within the index. This
precision trades some ease-of-use for the possibility of flexibility,
power, and efficiency. That said, the index’s API is intended to be
consistent, and to largely adhere to “there’s only one way to do it” [11].

Simplest Example

Before diving into the N-way flexibility and the other more complex
bits, then, let’s have a quick basic demonstration: a two way
relationship from one value to another. This will give you a taste of
the relationship index, and let you use it reasonably well for
light-to-medium usage. If you are going to use more of its features or
use it more in a potentially high-volume capacity, please consider
trying to understand the entire document.Let’s say that we are modeling a relationship of people to their
supervisors: an employee may have a single supervisor.Let’s say further that employee names are unique and can be used to
represent employees. We can use names as our “tokens”. Tokens are
similar to the primary key in a relational database, or in intid or
keyreference in Zope 3–some way to uniquely identify an object, which
sorts reliably and can be resolved to the object given the right context.>>> from __future__ import print_function
>>> from functools import total_ordering
>>> employees = {} # we'll use this to resolve the "name" tokens
>>> @total_ordering
... class Employee(object):
... def __init__(self, name, supervisor=None):
... if name in employees:
... raise ValueError('employee with same name already exists')
... self.name = name # expect this to be readonly
... self.supervisor = supervisor
... employees[name] = self
... def __repr__(self): # to make the tests prettier...
... return '<' + self.name + '>'
... def __eq__(self, other):
... return self is other
... def __lt__(self, other): # to make the tests prettier...
... # pukes if other doesn't have name
... return self.name < other.name
...So, we need to define how to turn employees into their tokens. That’s
trivial. (We explain the arguments to this function in detail below,
but for now we’re aiming for “breezy overview”.)

>>> def dumpEmployees(emp, index, cache):
... return emp.name
...

We also need a way to turn tokens into employees. We use our dict for that.

>>> def loadEmployees(token, index, cache):
... return employees[token]
...

We also need a way to tell the index to find the supervisor for indexing:

>>> def supervisor(emp, index):
... return emp.supervisor # None or another employee
...

Now we have enough to get started with an index. The first argument to
Index is the attributes to index: we pass thesupervisorfunction
(which is also used in this case to define the index’s name, since we do
not pass one explicitly), the dump and load functions, and a BTree
module that specifies sets that can hold our tokens (OO or OL should
also work). As keyword arguments, we tell the index how to dump and
load our relationship tokens–the same functions in this case–and what
a reasonable BTree module is for sets (again, we choose OI, but OO or OL
should work).>>> from zc.relationship import index
>>> import BTrees
>>> ix = index.Index(
... ({'callable': supervisor, 'dump': dumpEmployees,
... 'load': loadEmployees, 'btree': BTrees.family32.OI},),
... dumpRel=dumpEmployees, loadRel=loadEmployees,
... relFamily=BTrees.family32.OI)

Now let’s create a few employees.

>>> a = Employee('Alice')
>>> b = Employee('Betty', a)
>>> c = Employee('Chuck', a)
>>> d = Employee('Duane', b)
>>> e = Employee('Edgar', b)
>>> f = Employee('Frank', c)
>>> g = Employee('Grant', c)
>>> h = Employee('Howie', d)In a diagram style with which you will become familiar if you make it to
the end of this document, let’s show the hierarchy.

              Alice
            __/   \__
        Betty       Chuck
        /   \       /   \
    Duane  Edgar  Frank  Grant
      |
    Howie

So who works for Alice? To ask the index, we need to tell it about them.

>>> for emp in (a,b,c,d,e,f,g,h):
... ix.index(emp)
...

Now we can ask. We always need to ask with tokens. The index provides
a method to try and make this more convenient: tokenizeQuery [1].

[1] You can also resolve queries.

>>> ix.resolveQuery({None: 'Alice'})
{None: <Alice>}
>>> ix.resolveQuery({'supervisor': 'Alice'})
{'supervisor': <Alice>}

The spelling of the query is described in more detail later, but the idea is simply that keys in a dictionary specify attribute names, and the values specify the constraints.

>>> t = ix.tokenizeQuery
>>> sorted(ix.findRelationshipTokens(t({'supervisor': a})))
['Betty', 'Chuck']
>>> sorted(ix.findRelationships(t({'supervisor': a})))
[<Betty>, <Chuck>]

How do we find what the employee’s supervisor is? Well, in this case,
look at the attribute! If you can use an attribute that will usually be
a win in the ZODB. If you want to look at the data in the index,
though, that’s easy enough. Who is Howie’s supervisor? The None key in
the query indicates that we are matching against the relationship token
itself [2].

[2] You can search for relations that haven’t been indexed.

>>> list(ix.findRelationshipTokens({None: 'Ygritte'}))
[]

You can also combine searches with None, just for completeness.

>>> list(ix.findRelationshipTokens({None: 'Alice', 'supervisor': None}))
['Alice']
>>> list(ix.findRelationshipTokens({None: 'Alice', 'supervisor': 'Betty'}))
[]
>>> list(ix.findRelationshipTokens({None: 'Betty', 'supervisor': 'Alice'}))
['Betty']

>>> h.supervisor
<Duane>
>>> list(ix.findValueTokens('supervisor', t({None: h})))
['Duane']
>>> list(ix.findValues('supervisor', t({None: h})))
[<Duane>]

What about transitive searching? Well, you need to tell the index how to
walk the tree. In simple cases like this, the index’s
TransposingTransitiveQueriesFactory will do the trick. We just want to tell
the factory to transpose the two keys, None and ‘supervisor’. We can then use
it in queries for transitive searches.

>>> factory = index.TransposingTransitiveQueriesFactory(None, 'supervisor')

Who are all of Howie’s supervisors transitively (this looks up in the diagram)?

>>> list(ix.findValueTokens('supervisor', t({None: h}),
... transitiveQueriesFactory=factory))
['Duane', 'Betty', 'Alice']
>>> list(ix.findValues('supervisor', t({None: h}),
... transitiveQueriesFactory=factory))
[<Duane>, <Betty>, <Alice>]

Who are all of the people Betty supervises transitively, breadth first (this looks down in the diagram)?

>>> people = list(ix.findRelationshipTokens(
... t({'supervisor': b}), transitiveQueriesFactory=factory))
>>> sorted(people[:2])
['Duane', 'Edgar']
>>> people[2]
'Howie'
>>> people = list(ix.findRelationships(
... t({'supervisor': b}), transitiveQueriesFactory=factory))
>>> sorted(people[:2])
[<Duane>, <Edgar>]
>>> people[2]
<Howie>

This transitive search is really the only transitive factory you would want
here, so it probably is safe to wire it in as a default. While most
attributes on the index must be set at instantiation, this happens to be one
we can set after the fact.

>>> ix.defaultTransitiveQueriesFactory = factory

Now all searches are transitive.

>>> list(ix.findValueTokens('supervisor', t({None: h})))
['Duane', 'Betty', 'Alice']
>>> list(ix.findValues('supervisor', t({None: h})))
[<Duane>, <Betty>, <Alice>]
>>> people = list(ix.findRelationshipTokens(t({'supervisor': b})))
>>> sorted(people[:2])
['Duane', 'Edgar']
>>> people[2]
'Howie'
>>> people = list(ix.findRelationships(t({'supervisor': b})))
>>> sorted(people[:2])
[<Duane>, <Edgar>]
>>> people[2]
<Howie>

We can force a non-transitive search, or a specific search depth, with maxDepth [3].

[3] A search with a maxDepth > 1 but no transitiveQueriesFactory raises an error.

>>> ix.defaultTransitiveQueriesFactory = None
>>> ix.findRelationshipTokens({'supervisor': 'Duane'}, maxDepth=3)
Traceback (most recent call last):
...
ValueError: if maxDepth not in (None, 1), queryFactory must be available

>>> ix.defaultTransitiveQueriesFactory = factory
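As an aside, the breadth-first, cycle-safe, depth-limited behavior of these transitive searches can be mimicked in plain Python. The following toy walk is illustrative only (it is not the index implementation); the supervises mapping restates the diagram above:

```python
def transitive_values(get_values, start, max_depth=None):
    """Yield values reachable from `start`, breadth first,
    skipping anything already seen (the cycle guard) and
    stopping after max_depth levels if one is given."""
    seen = {start}
    frontier = [start]
    depth = 0
    while frontier and (max_depth is None or depth < max_depth):
        depth += 1
        next_frontier = []
        for token in frontier:
            for value in get_values(token):
                if value not in seen:   # cycle / duplicate guard
                    seen.add(value)
                    yield value
                    next_frontier.append(value)
        frontier = next_frontier

# With the employee hierarchy above, get_values maps a person to the
# people they directly supervise:
supervises = {'Alice': ['Betty', 'Chuck'], 'Betty': ['Duane', 'Edgar'],
              'Chuck': ['Frank', 'Grant'], 'Duane': ['Howie']}
get = lambda tok: supervises.get(tok, [])
```

With max_depth=1 this behaves like the intransitive searches shown next; with no limit it walks the whole subtree, as the earlier Betty examples did.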
['Duane']
>>> list(ix.findValues('supervisor', t({None: h}), maxDepth=1))
[<Duane>]
>>> sorted(ix.findRelationshipTokens(t({'supervisor': b}), maxDepth=1))
['Duane', 'Edgar']
>>> sorted(ix.findRelationships(t({'supervisor': b}), maxDepth=1))
[<Duane>, <Edgar>]Transitive searches can handle recursive loops and have other features as
discussed in the larger example and the interface.Our last two introductory examples show off three other methods:isLinkedfindRelationshipTokenChainsandfindRelationshipChains.isLinked lets you answer whether two queries are linked. Is Alice a
supervisor of Howie? What about Chuck? (Note that, if your
relationships describe a hierarchy, searching up a hierarchy is usually
more efficient, so the second pair of questions is generally preferable
to the first in that case.)

>>> ix.isLinked(t({'supervisor': a}), targetQuery=t({None: h}))
True
>>> ix.isLinked(t({'supervisor': c}), targetQuery=t({None: h}))
False
>>> ix.isLinked(t({None: h}), targetQuery=t({'supervisor': a}))
True
>>> ix.isLinked(t({None: h}), targetQuery=t({'supervisor': c}))
False

findRelationshipTokenChains and findRelationshipChains help you discover how things are transitively related. A “chain” is a transitive path of
relationships. For instance, what’s the chain of command between Alice and
Howie?

>>> list(ix.findRelationshipTokenChains(
... t({'supervisor': a}), targetQuery=t({None: h})))
[('Betty', 'Duane', 'Howie')]
>>> list(ix.findRelationshipChains(
... t({'supervisor': a}), targetQuery=t({None: h})))
[(<Betty>, <Duane>, <Howie>)]

This gives you a quick overview of the basic index features. This should be
enough to get you going. Now we’ll dig in some more, if you want to know the
details.

Starting the N-Way Examples

To exercise the index further, we’ll come up with a somewhat complex
relationship to index. Let’s say we are modeling a generic set-up like
SUBJECT RELATIONSHIPTYPE OBJECT in CONTEXT. This could let you let
users define relationship types, then index them on the fly. The
context can be something like a project, so we could say

“Fred” “has the role of” “Project Manager” on the “zope.org redesign project”.

Mapped to the parts of the relationship object, that’s

[“Fred” (SUBJECT)] [“has the role of” (RELATIONSHIPTYPE)] [“Project Manager” (OBJECT)] on the [“zope.org redesign project” (CONTEXT)].

Without the context, you can still do interesting things like

[“Ygritte” (SUBJECT)] [“manages” (RELATIONSHIPTYPE)] [“Uther” (OBJECT)]

In our new example, we’ll leverage the fact that the index can accept
interface attributes to index. So let’s define a basic interface
without the context, and then an extended interface with the context.

>>> from zope import interface
>>> class IRelationship(interface.Interface):
... subjects = interface.Attribute(
... 'The sources of the relationship; the subject of the sentence')
... relationshiptype = interface.Attribute(
... '''unicode: the single relationship type of this relationship;
... usually contains the verb of the sentence.''')
... objects = interface.Attribute(
... '''the targets of the relationship; usually a direct or
... indirect object in the sentence''')
...
>>> class IContextAwareRelationship(IRelationship):
... def getContext():
... '''return a context for the relationship'''
...

Now we’ll create an index. To do that, we must minimally pass in an
iterable describing the indexed values. Each item in the iterable must
either be an interface element (a zope.interface.Attribute or
zope.interface.Method associated with an interface, typically obtained
using a spelling likeIRelationship[‘subjects’]) or a dict. Each dict
must have either the ‘element’ key, which is the interface element to be
indexed; or the ‘callable’ key, which is the callable shown in the
simpler, introductory example above [4].

[4] Instantiating an index with a dictionary containing both the ‘element’ and the ‘callable’ key is an error:

>>> def subjects(obj, index, cache):
... return obj.subjects
...
>>> ix = index.Index(
... ({'element': IRelationship['subjects'],
... 'callable': subjects, 'multiple': True},
... IRelationship['relationshiptype'],
... {'element': IRelationship['objects'], 'multiple': True},
... IContextAwareRelationship['getContext']),
... index.TransposingTransitiveQueriesFactory('subjects', 'objects'))
Traceback (most recent call last):
...
ValueError: cannot provide both callable and element

While we’re at it, as you might expect, you must provide one of them.

>>> ix = index.Index(
... ({'multiple': True},
... IRelationship['relationshiptype'],
... {'element': IRelationship['objects'], 'multiple': True},
... IContextAwareRelationship['getContext']),
... index.TransposingTransitiveQueriesFactory('subjects', 'objects'))
Traceback (most recent call last):
...
ValueError: must provide element or callable

It then
can contain other keys to override the default indexing behavior for the
element.

The element’s or callable’s __name__ will be used to refer to this
element in queries, unless the dict has a ‘name’ key, which must be a
non-empty string [5].

[5] It’s possible to pass a callable without a name, in which case you must explicitly specify a name.

>>> @total_ordering
... class AttrGetter(object):
... def __init__(self, attr):
... self.attr = attr
... def __eq__(self, other):
... return self is other
... def __lt__(self, other):
... return self.attr < getattr(other, 'attr', other)
... def __call__(self, obj, index, cache):
... return getattr(obj, self.attr, None)
...
>>> subjects = AttrGetter('subjects')
>>> ix = index.Index(
... ({'callable': subjects, 'multiple': True},
... IRelationship['relationshiptype'],
... {'element': IRelationship['objects'], 'multiple': True},
... IContextAwareRelationship['getContext']),
... index.TransposingTransitiveQueriesFactory('subjects', 'objects'))
Traceback (most recent call last):
...
ValueError: no name specified
>>> ix = index.Index(
... ({'callable': subjects, 'multiple': True, 'name': subjects},
... IRelationship['relationshiptype'],
... {'element': IRelationship['objects'], 'multiple': True},
... IContextAwareRelationship['getContext']),
... index.TransposingTransitiveQueriesFactory('subjects', 'objects'))

It’s also an error to specify the same name or element twice, however you do it.

>>> ix = index.Index(
... ({'callable': subjects, 'multiple': True, 'name': 'objects'},
... IRelationship['relationshiptype'],
... {'element': IRelationship['objects'], 'multiple': True},
... IContextAwareRelationship['getContext']),
... index.TransposingTransitiveQueriesFactory('subjects', 'objects'))
... # doctest: +ELLIPSIS
Traceback (most recent call last):
...
ValueError: ('name already used', 'objects')

>>> ix = index.Index(
... ({'callable': subjects, 'multiple': True, 'name': 'subjects'},
... IRelationship['relationshiptype'],
... {'callable': subjects, 'multiple': True, 'name': 'objects'},
... IContextAwareRelationship['getContext']),
... index.TransposingTransitiveQueriesFactory('subjects', 'objects'))
... # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
Traceback (most recent call last):
...
ValueError: ('element already indexed',
<zc.relationship.README.AttrGetter object at ...>)>>> ix = index.Index(
... ({'element': IRelationship['objects'], 'multiple': True,
... 'name': 'subjects'},
... IRelationship['relationshiptype'],
... {'element': IRelationship['objects'], 'multiple': True},
... IContextAwareRelationship['getContext']),
... index.TransposingTransitiveQueriesFactory('subjects', 'objects'))
... # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
Traceback (most recent call last):
...
ValueError: ('element already indexed',
<zope.interface.interface.Attribute object at ...>)The element is assumed to be a single value, unless the dict has a ‘multiple’
key with a value equivalent to True. In our example, “subjects” and “objects” are
potentially multiple values, while “relationshiptype” and “getContext” are
single values.

By default, the values for the element will be tokenized and resolved using an
intid utility, and stored in a BTrees.IFBTree. This is a good choice if you
want to make object tokens easily mergable with typical Zope 3 catalog
results. If you need different behavior for any element, you can specify
three keys per dict:

- ‘dump’, the tokenizer, a callable taking (obj, index, cache) and returning a token;
- ‘load’, the token resolver, a callable taking (token, index, cache) to return the object which the token represents; and
- ‘btree’, the btree module to use to store and process the tokens, such as BTrees.OOBTree.

If you provide a custom ‘dump’ you will almost certainly need to provide a
custom ‘load’; and if your tokens are not integers then you will need to
specify a different ‘btree’ (either BTrees.OOBTree or BTrees.OIBTree, as of
this writing).

The tokenizing function (‘dump’) must return homogeneous, immutable tokens:
that is, any given tokenizer should only return tokens that sort
unambiguously, across Python versions, which usually means that they are all of
the same type. For instance, a tokenizer should only return ints, or only
return strings, or only tuples of strings, and so on. Different tokenizers
used for different elements in the same index may return different types. They
also may return the same value as the other tokenizers to mean different
objects: the stores are separate.

Note that both dump and load may also be explicitly None in the dictionary:
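For illustration, a minimal string-token scheme along these lines might look like the following. This is a sketch: the Person class and the people dict are invented for the example, and the commented-out value description only indicates where the pieces would plug into the index configuration described above.

```python
# Hypothetical string-token scheme: objects are tokenized by a unique
# name, so the index would need an OO (or OI) BTree module for this
# value instead of the default integer-keyed IFBTree.
people = {}  # token (name) -> object; stands in for a real registry

class Person(object):
    def __init__(self, name):
        self.name = name
        people[name] = self

def dumpPerson(obj, index, cache):
    return obj.name            # a string: homogeneous and immutable

def loadPerson(token, index, cache):
    return people[token]

# The matching value description would then include something like:
# {'element': IRelationship['subjects'], 'multiple': True,
#  'dump': dumpPerson, 'load': loadPerson, 'btree': BTrees.OOBTree}
```

Because every token this dump produces is a string, the tokens sort unambiguously against each other, satisfying the homogeneity requirement.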
this will mean that the values are already appropriate to be used as tokens.
It enables an optimization described in the Optimizing relationship index use section [6].

[6] It is not allowed to provide only one or the other of ‘load’ and ‘dump’.

>>> ix = index.Index(
... ({'element': IRelationship['subjects'], 'multiple': True,
... 'name': 'subjects','dump': None},
... IRelationship['relationshiptype'],
... {'element': IRelationship['objects'], 'multiple': True},
... IContextAwareRelationship['getContext']),
... index.TransposingTransitiveQueriesFactory('subjects', 'objects'))
... # doctest: +ELLIPSIS
Traceback (most recent call last):
...
ValueError: either both of 'dump' and 'load' must be None, or neither

>>> ix = index.Index(
... ({'element': IRelationship['objects'], 'multiple': True,
... 'name': 'subjects','load': None},
... IRelationship['relationshiptype'],
... {'element': IRelationship['objects'], 'multiple': True},
... IContextAwareRelationship['getContext']),
... index.TransposingTransitiveQueriesFactory('subjects', 'objects'))
... # doctest: +ELLIPSIS
Traceback (most recent call last):
...
ValueError: either both of 'dump' and 'load' must be None, or neither

In addition to the one required argument to the class, the signature contains
four optional arguments. The ‘defaultTransitiveQueriesFactory’ is the next,
and allows you to specify a callable as described in
interfaces.ITransitiveQueriesFactory. Without it transitive searches will
require an explicit factory every time, which can be tedious. The index
package provides a simple implementation that supports transitive searches
following two indexed elements (TransposingTransitiveQueriesFactory) and this
document describes more complex possible transitive behaviors that can be
modeled. For our example, “subjects” and “objects” are the default transitive
fields, so if Ygritte (SUBJECT) manages Uther (OBJECT), and Uther (SUBJECT)
manages Emily (OBJECT), a search for all those transitively managed by Ygritte
will transpose Uther from OBJECT to SUBJECT and find that Uther manages Emily.
Similarly, to find all transitive managers of Emily, Uther will change place
from SUBJECT to OBJECT in the search [7].

[7] The factory lets you specify two
names, which are transposed for transitive walks. This is usually what
you want for a hierarchy and similar variations: as the text describes
later, more complicated traversal might be desired in more complicated
relationships, as found in genealogy.

It supports both transposing values and relationship tokens, as seen in
the text.

In this footnote, we’ll explore the factory in the small, with index
stubs.

>>> factory = index.TransposingTransitiveQueriesFactory(
... 'subjects', 'objects')
>>> class StubIndex(object):
... def findValueTokenSet(self, rel, name):
... return {
... ('foo', 'objects'): ('bar',),
... ('bar', 'subjects'): ('foo',)}[(rel, name)]
...
>>> ix = StubIndex()
>>> list(factory(['foo'], {'subjects': 'foo'}, ix, {}))
[{'subjects': 'bar'}]
>>> list(factory(['bar'], {'objects': 'bar'}, ix, {}))
[{'objects': 'foo'}]

If you specify both fields then it won’t transpose.

>>> list(factory(['foo'], {'objects': 'bar', 'subjects': 'foo'}, ix, {}))
[]

If you specify additional fields then it keeps them statically.

>>> list(factory(['foo'], {'subjects': 'foo', 'getContext': 'shazam'},
... ix, {})) == [{'subjects': 'bar', 'getContext': 'shazam'}]
True

The next three arguments, ‘dumpRel’, ‘loadRel’ and ‘relFamily’, have
to do with the relationship tokens. The default values assume that you will
be using intid tokens for the relationships, and so ‘dumpRel’ and
‘loadRel’ tokenize and resolve, respectively, using the intid utility; and
‘relFamily’ defaults to BTrees.IFBTree.

If relationship tokens (from ‘findRelationshipChains’ or ‘apply’ or
‘findRelationshipTokenSet’, or in a filter to most of the search methods) are
to be merged with other catalog results, relationship tokens should be based
on intids, as in the default. For instance, if some relationships are only
available to some users on the basis of security, and you keep an index of
this, then you will want to use a filter based on the relationship tokens
viewable by the current user as kept by the catalog index.

If you are unable or unwilling to use intid relationship tokens, tokens must
still be homogeneous and immutable as described above for indexed value tokens.

The last argument is ‘family’, which effectively defaults to BTrees.family32.
If you don’t explicitly specify BTree modules for your value and relationship
sets, this value will determine whether you use the 32 bit or the 64 bit
IFBTrees [8].

[8] Here’s an example of specifying the family64. This is a “white
box” demonstration that looks at some of the internals.

>>> ix = index.Index( # 32 bit default
... ({'element': IRelationship['subjects'], 'multiple': True},
... IRelationship['relationshiptype'],
... {'element': IRelationship['objects'], 'multiple': True},
... IContextAwareRelationship['getContext']),
... index.TransposingTransitiveQueriesFactory('subjects', 'objects'))
>>> ix._relTools['BTree'] is BTrees.family32.IF.BTree
True
>>> ix._attrs['subjects']['BTree'] is BTrees.family32.IF.BTree
True
>>> ix._attrs['objects']['BTree'] is BTrees.family32.IF.BTree
True
>>> ix._attrs['getContext']['BTree'] is BTrees.family32.IF.BTree
True

>>> ix = index.Index( # explicit 32 bit
... ({'element': IRelationship['subjects'], 'multiple': True},
... IRelationship['relationshiptype'],
... {'element': IRelationship['objects'], 'multiple': True},
... IContextAwareRelationship['getContext']),
... index.TransposingTransitiveQueriesFactory('subjects', 'objects'),
... family=BTrees.family32)
>>> ix._relTools['BTree'] is BTrees.family32.IF.BTree
True
>>> ix._attrs['subjects']['BTree'] is BTrees.family32.IF.BTree
True
>>> ix._attrs['objects']['BTree'] is BTrees.family32.IF.BTree
True
>>> ix._attrs['getContext']['BTree'] is BTrees.family32.IF.BTree
True

>>> ix = index.Index( # explicit 64 bit
... ({'element': IRelationship['subjects'], 'multiple': True},
... IRelationship['relationshiptype'],
... {'element': IRelationship['objects'], 'multiple': True},
... IContextAwareRelationship['getContext']),
... index.TransposingTransitiveQueriesFactory('subjects', 'objects'),
... family=BTrees.family64)
>>> ix._relTools['BTree'] is BTrees.family64.IF.BTree
True
>>> ix._attrs['subjects']['BTree'] is BTrees.family64.IF.BTree
True
>>> ix._attrs['objects']['BTree'] is BTrees.family64.IF.BTree
True
>>> ix._attrs['getContext']['BTree'] is BTrees.family64.IF.BTree
True

If we had an IIntId utility registered and wanted to use the defaults, then
instantiation of an index for our relationship would look like this:

>>> ix = index.Index(
... ({'element': IRelationship['subjects'], 'multiple': True},
... IRelationship['relationshiptype'],
... {'element': IRelationship['objects'], 'multiple': True},
... IContextAwareRelationship['getContext']),
... index.TransposingTransitiveQueriesFactory('subjects', 'objects'))

That’s the simple case. With relatively little fuss, we have an IIndex, and a
defaultTransitiveQueriesFactory, implementing ITransitiveQueriesFactory, that
switches subjects and objects as described above.

>>> from zc.relationship import interfaces
>>> from zope.interface.verify import verifyObject
>>> verifyObject(interfaces.IIndex, ix)
True
>>> verifyObject(
... interfaces.ITransitiveQueriesFactory,
... ix.defaultTransitiveQueriesFactory)
True

For the purposes of a more complex example, though, we are going to exercise
more of the index’s options: we’ll use at least one of ‘name’, ‘dump’, ‘load’,
and ‘btree’.

- ‘subjects’ and ‘objects’ will use a custom integer-based token generator.
  They will share tokens, which will let us use the default
  TransposingTransitiveQueriesFactory. We can keep using the IFBTree sets,
  because the tokens are still integers.

- ‘relationshiptype’ will use a name ‘reltype’ and will just use the unicode
  value as the token, without translation but with a registration check.

- ‘getContext’ will use a name ‘context’ but will continue to use the intid
  utility and use the names from their interface. We will see later that
  making transitive walks between different token sources must be handled with
  care.

We will also use the intid utility to resolve relationship tokens. See the
relationship container (and container.rst) for examples of changing the
relationship type, especially in keyref.py.

Here are the methods we’ll use for the ‘subjects’ and ‘objects’ tokens,
followed by the methods we’ll use for the ‘relationshiptype’ tokens.

>>> lookup = {}
>>> counter = [0]
>>> prefix = '_z_token__'
>>> def dump(obj, index, cache):
... assert (interfaces.IIndex.providedBy(index) and
... isinstance(cache, dict)), (
... 'did not receive correct arguments')
... token = getattr(obj, prefix, None)
... if token is None:
... token = counter[0]
... counter[0] += 1
... if counter[0] >= 2147483647:
... raise RuntimeError("Whoa! That's a lot of ids!")
... assert token not in lookup
... setattr(obj, prefix, token)
... lookup[token] = obj
... return token
...
>>> def load(token, index, cache):
... assert (interfaces.IIndex.providedBy(index) and
... isinstance(cache, dict)), (
... 'did not receive correct arguments')
... return lookup[token]
...
>>> relTypes = []
>>> def relTypeDump(obj, index, cache):
... assert obj in relTypes, 'unknown relationshiptype'
... return obj
...
>>> def relTypeLoad(token, index, cache):
... assert token in relTypes, 'unknown relationshiptype'
... return token
...

Note that these implementations are completely silly if we actually cared about
ZODB-based persistence: to even make it half-acceptable we should make the
counter, lookup, and relTypes persistently stored somewhere using a
reasonable persistent data structure. This is just a demonstration example.

Now we can make an index.

As in our initial example, we are going to use the simple transitive query
factory defined in the index module for our default transitive behavior: when
you want to do transitive searches, transpose ‘subjects’ with ‘objects’ and
keep everything else; and if both subjects and objects are provided, don’t do
any transitive search.

>>> from BTrees import OIBTree # could also be OOBTree
>>> ix = index.Index(
... ({'element': IRelationship['subjects'], 'multiple': True,
... 'dump': dump, 'load': load},
... {'element': IRelationship['relationshiptype'],
... 'dump': relTypeDump, 'load': relTypeLoad, 'btree': OIBTree,
... 'name': 'reltype'},
... {'element': IRelationship['objects'], 'multiple': True,
... 'dump': dump, 'load': load},
... {'element': IContextAwareRelationship['getContext'],
... 'name': 'context'}),
... index.TransposingTransitiveQueriesFactory('subjects', 'objects'))

We’ll want to put the index somewhere in the system so it can find the intid
utility. We’ll add it as a utility just as part of the example. As long as
the index has a valid __parent__ that is itself connected transitively to a
site manager with the desired intid utility, everything should work fine, so
there is no need to install it as a utility. This is just an example.

>>> from zope import interface
>>> sm = app.getSiteManager()
>>> sm['rel_index'] = ix
>>> import zope.interface.interfaces
>>> registry = zope.interface.interfaces.IComponentRegistry(sm)
>>> registry.registerUtility(ix, interfaces.IIndex)
>>> import transaction
>>> transaction.commit()

Now we’ll create some representative objects that we can relate, and create
and index our first example relationship.

In the example, note that the context will only be available as an adapter to
ISpecialRelationship objects: the index tries to adapt objects to the
appropriate interface, and considers the value to be empty if it cannot adapt.

>>> import persistent
>>> from zope.app.container.contained import Contained
>>> class Base(persistent.Persistent, Contained):
... def __init__(self, name):
... self.name = name
... def __repr__(self):
... return '<%s %r>' % (self.__class__.__name__, self.name)
...
>>> class Person(Base): pass
...
>>> class Role(Base): pass
...
>>> class Project(Base): pass
...
>>> class Company(Base): pass
...
>>> @interface.implementer(IRelationship)
... class Relationship(persistent.Persistent, Contained):
... def __init__(self, subjects, relationshiptype, objects):
... self.subjects = subjects
... assert relationshiptype in relTypes
... self.relationshiptype = relationshiptype
... self.objects = objects
... def __repr__(self):
... return '<%r %s %r>' % (
... self.subjects, self.relationshiptype, self.objects)
...
>>> class ISpecialRelationship(interface.Interface):
... pass
...
>>> from zope import component
>>> @component.adapter(ISpecialRelationship)
... @interface.implementer(IContextAwareRelationship)
... class ContextRelationshipAdapter(object):
... def __init__(self, adapted):
... self.adapted = adapted
... def getContext(self):
... return getattr(self.adapted, '_z_context__', None)
... def setContext(self, value):
... self.adapted._z_context__ = value
... def __getattr__(self, name):
... return getattr(self.adapted, name)
...
>>> component.provideAdapter(ContextRelationshipAdapter)
>>> @interface.implementer(ISpecialRelationship)
... class SpecialRelationship(Relationship):
... pass
...
>>> people = {}
>>> for p in ['Abe', 'Bran', 'Cathy', 'David', 'Emily', 'Fred', 'Gary',
... 'Heather', 'Ingrid', 'Jim', 'Karyn', 'Lee', 'Mary',
... 'Nancy', 'Olaf', 'Perry', 'Quince', 'Rob', 'Sam', 'Terry',
... 'Uther', 'Van', 'Warren', 'Xen', 'Ygritte', 'Zane']:
... app[p] = people[p] = Person(p)
...
>>> relTypes.extend(
... ['has the role of', 'manages', 'taught', 'commissioned'])
>>> roles = {}
>>> for r in ['Project Manager', 'Software Engineer', 'Designer',
... 'Systems Administrator', 'Team Leader', 'Mascot']:
... app[r] = roles[r] = Role(r)
...
>>> projects = {}
>>> for p in ['zope.org redesign', 'Zope 3 manual',
... 'improved test coverage', 'Vault design and implementation']:
... app[p] = projects[p] = Project(p)
...
>>> companies = {}
>>> for c in ['Ynod Corporation', 'HAL, Inc.', 'Zookd']:
... app[c] = companies[c] = Company(c)
...

>>> app['fredisprojectmanager'] = rel = SpecialRelationship(
... (people['Fred'],), 'has the role of', (roles['Project Manager'],))
>>> IContextAwareRelationship(rel).setContext(
... projects['zope.org redesign'])
>>> ix.index(rel)
>>> transaction.commit()

Token conversion
----------------

Before we examine the searching features, we should quickly discuss the
tokenizing API on the index. All search queries must use value tokens, and
search results can sometimes be value or relationship tokens. Therefore
converting between tokens and real values can be important. The index
provides a number of conversion methods for this purpose.Arguably the most important istokenizeQuery: it takes a query, in which
each key and value are the name of an indexed value and an actual value,
respectively; and returns a query in which the actual values have been
converted to tokens. For instance, consider the following example. It’s a
bit hard to show the conversion reliably (we can’t know what the intid tokens
will be, for instance) so we just show that the result’s values are tokenized
versions of the inputs.

>>> res = ix.tokenizeQuery(
... {'objects': roles['Project Manager'],
... 'context': projects['zope.org redesign']})
>>> res['objects'] == dump(roles['Project Manager'], ix, {})
True
>>> from zope.app.intid.interfaces import IIntIds
>>> intids = component.getUtility(IIntIds, context=ix)
>>> res['context'] == intids.getId(projects['zope.org redesign'])
True

Tokenized queries can be resolved to values again using resolveQuery.

>>> sorted(ix.resolveQuery(res).items()) # doctest: +NORMALIZE_WHITESPACE
[('context', <Project 'zope.org redesign'>),
('objects', <Role 'Project Manager'>)]

Other useful conversions are tokenizeValues, which returns an iterable of
tokens for the values of the given index name;

>>> examples = (people['Abe'], people['Bran'], people['Cathy'])
>>> res = list(ix.tokenizeValues(examples, 'subjects'))
>>> res == [dump(o, ix, {}) for o in examples]
True

resolveValueTokens, which returns an iterable of values for the tokens of
the given index name;

>>> list(ix.resolveValueTokens(res, 'subjects'))
[<Person 'Abe'>, <Person 'Bran'>, <Person 'Cathy'>]

tokenizeRelationship, which returns a token for the given relationship;

>>> res = ix.tokenizeRelationship(rel)
>>> res == intids.getId(rel)
True

resolveRelationshipToken, which returns a relationship for the given token;

>>> ix.resolveRelationshipToken(res) is rel
True

tokenizeRelationships, which returns an iterable of tokens for the relations
given; and

>>> app['another_rel'] = another_rel = Relationship(
... (companies['Ynod Corporation'],), 'commissioned',
... (projects['Vault design and implementation'],))
>>> res = list(ix.tokenizeRelationships((another_rel, rel)))
>>> res == [intids.getId(r) for r in (another_rel, rel)]
True

resolveRelationshipTokens, which returns an iterable of relations for the
tokens given.

>>> list(ix.resolveRelationshipTokens(res)) == [another_rel, rel]
TrueBasic searchingNow we move to the meat of the interface: searching. The index interface
defines several searching methods:findValuesandfindValueTokensask “to what is this related?”;findRelationshipChainsandfindRelationshipTokenChainsask “how is this
related?”, especially for transitive searches;isLinkedasks “does a relationship like this exist?”;findRelationshipTokenSetasks “what are the intransitive relationships
that match my query?” and is particularly useful for low-level usage of the
index data structures;findRelationshipsasks the same question, but returns an iterable of
relationships rather than a set of tokens;findValueTokenSetasks “what are the value tokens for this particular
indexed name and this relationship token?” and is useful for low-level
usage of the index data structures such as transitive query factories; andthe standard zope.index methodapplyessentially exposes thefindRelationshipTokenSetandfindValueTokensmethods via a query object
spelling.findRelationshipChainsandfindRelationshipTokenChainsare paired methods,
doing the same work but with and without resolving the resulting tokens; andfindValuesandfindValueTokensare also paired in the same way.It is very important to note that all queries must use tokens, not actual
objects. As introduced above, the index provides a method to ease that
requirement, in the form of atokenizeQuerymethod that converts a dict with
objects to a dict with tokens. You’ll see below that we shorten our calls by
stashingtokenizeQueryaway in the ‘q’ name.>>> q = ix.tokenizeQueryWe have indexed our first example relationship–“Fred has the role of project
manager in the zope.org redesign”–so we can search for it. We’ll first look
at findValues and findValueTokens. Here, we ask ‘who has the role of
project manager in the zope.org redesign?’. We do it first with findValues
and then with findValueTokens [9].

[9] findValueTokens and findValues raise errors if
you try to get a value that is not indexed.

>>> list(ix.findValues(
... 'folks',
... q({'reltype': 'has the role of',
... 'objects': roles['Project Manager'],
... 'context': projects['zope.org redesign']})))
Traceback (most recent call last):
...
ValueError: ('name not indexed', 'folks')

>>> list(ix.findValueTokens(
... 'folks',
... q({'reltype': 'has the role of',
... 'objects': roles['Project Manager'],
... 'context': projects['zope.org redesign']})))
Traceback (most recent call last):
...
ValueError: ('name not indexed', 'folks')

>>> list(ix.findValues(
... 'subjects',
... q({'reltype': 'has the role of',
... 'objects': roles['Project Manager'],
... 'context': projects['zope.org redesign']})))
[<Person 'Fred'>]

>>> [load(t, ix, {}) for t in ix.findValueTokens(
... 'subjects',
... q({'reltype': 'has the role of',
... 'objects': roles['Project Manager'],
... 'context': projects['zope.org redesign']}))]
[<Person 'Fred'>]

If you don’t pass a query to these methods, you get all indexed values for the
given name in a BTree (don’t modify this! it is an internal data structure;
we pass it out directly because you can do efficient things with it with BTree
set operations). In this case, we’ve only indexed a single relationship,
so its subjects are the subjects in this result.

>>> res = ix.findValueTokens('subjects', maxDepth=1)
>>> res # doctest: +ELLIPSIS
<BTrees.IOBTree.IOBTree object at ...>
>>> [load(t, ix, {}) for t in res]
[<Person 'Fred'>]

If we want to find all the relationships for which Fred is a subject, we can
use findRelationshipTokenSet. It, combined with findValueTokenSet, is
useful for querying the index data structures at a fairly low level, when you
want to use the data in a way that the other search methods don’t support.

findRelationshipTokenSet, given a single dictionary of {indexName: token},
returns a set (based on the btree family for relationships in the index) of
relationship tokens that match it, intransitively.

>>> res = ix.findRelationshipTokenSet(q({'subjects': people['Fred']}))
>>> res # doctest: +ELLIPSIS
<BTrees.IFBTree.IFTreeSet object at ...>
>>> [intids.getObject(t) for t in res]
[<(<Person 'Fred'>,) has the role of (<Role 'Project Manager'>,)>]

It is in fact equivalent to findRelationshipTokens called without
transitivity and without any filtering.

>>> res2 = ix.findRelationshipTokens(
... q({'subjects': people['Fred']}), maxDepth=1)
>>> res2 is res
True

The findRelationshipTokenSet method always returns a set, even if the
query does not have any results.

>>> res = ix.findRelationshipTokenSet(q({'subjects': people['Ygritte']}))
>>> res # doctest: +ELLIPSIS
<BTrees.IFBTree.IFTreeSet object at ...>
>>> list(res)
[]

An empty query returns all relationships in the index (this is true of other
search methods as well).

>>> res = ix.findRelationshipTokenSet({})
>>> res # doctest: +ELLIPSIS
<BTrees.IFBTree.IFTreeSet object at ...>
>>> len(res) == ix.documentCount()
True
>>> for r in ix.resolveRelationshipTokens(res):
... if r not in ix:
... print('oops')
... break
... else:
... print('correct')
...
correct

findRelationships can do the same thing but with resolving the relationships.

>>> list(ix.findRelationships(q({'subjects': people['Fred']})))
[<(<Person 'Fred'>,) has the role of (<Role 'Project Manager'>,)>]

However, like findRelationshipTokens and unlike findRelationshipTokenSet,
findRelationships can be used
transitively, as shown in the introductory section of this document.

findValueTokenSet, given a relationship token and a value name, returns a
set (based on the btree family for the value) of value tokens for that
relationship.

>>> src = ix.findRelationshipTokenSet(q({'subjects': people['Fred']}))
>>> res = ix.findValueTokenSet(list(src)[0], 'subjects')
>>> res # doctest: +ELLIPSIS
<BTrees.IFBTree.IFTreeSet object at ...>
>>> [load(t, ix, {}) for t in res]
[<Person 'Fred'>]

Like findRelationshipTokenSet and findRelationshipTokens,
findValueTokenSet is equivalent to findValueTokens without a
transitive search or filtering.

>>> res2 = ix.findValueTokenSet(list(src)[0], 'subjects')
>>> res2 is res
True

The apply method, part of the zope.index.interfaces.IIndexSearch interface,
can essentially only duplicate the findValueTokens and
findRelationshipTokenSet search calls. The only additional functionality
is that the results are always IFBTree sets: if the tokens requested are not
in an IFBTree set (on the basis of the ‘btree’ key during instantiation, for
instance) then the index raises a ValueError. A wrapper dict specifies the
type of search with the key, and the value should be the arguments for the
search.

Here, we ask for the current known roles on the zope.org redesign.

>>> res = ix.apply({'values':
... {'resultName': 'objects', 'query':
... q({'reltype': 'has the role of',
... 'context': projects['zope.org redesign']})}})
>>> res # doctest: +ELLIPSIS
IFSet([...])
>>> [load(t, ix, {}) for t in res]
[<Role 'Project Manager'>]

Ideally, this would fail, because the tokens, while integers, are not actually
mergeable with intid-based catalog results. However, the index only complains
if it can tell that the returned set is not an IFTreeSet or IFSet.

Here, we ask for the relationships that have the ‘has the role of’ type.

>>> res = ix.apply({'relationships':
... q({'reltype': 'has the role of'})})
>>> res # doctest: +ELLIPSIS
<BTrees.IFBTree.IFTreeSet object at ...>
>>> [intids.getObject(t) for t in res]
[<(<Person 'Fred'>,) has the role of (<Role 'Project Manager'>,)>]

Here, we ask for the known relationship types for the zope.org redesign. It
will fail, because the result cannot be expressed as an IFBTree.IFTreeSet.

>>> res = ix.apply({'values':
... {'resultName': 'reltype', 'query':
... q({'context': projects['zope.org redesign']})}})
... # doctest: +NORMALIZE_WHITESPACE
Traceback (most recent call last):
...
ValueError: cannot fulfill `apply` interface because cannot return an
(I|L)FBTree-based result

The same kind of error will be raised if you request relationships and the
relationships are not stored in IFBTree or LFBTree structures [10].

[10] Only one key may be in the dictionary.

>>> res = ix.apply({'values':
... {'resultName': 'objects', 'query':
... q({'reltype': 'has the role of',
... 'context': projects['zope.org redesign']})},
... 'relationships': q({'reltype': 'has the role of'})})
Traceback (most recent call last):
...
ValueError: one key in the primary query dictionary

The keys must be one of ‘values’ or ‘relationships’.

>>> res = ix.apply({'kumquats':
... {'resultName': 'objects', 'query':
... q({'reltype': 'has the role of',
... 'context': projects['zope.org redesign']})}})
Traceback (most recent call last):
...
ValueError: ('unknown query type', 'kumquats')

If a relationship index uses LFBTrees, searches are fine.

>>> ix2 = index.Index( # explicit 64 bit
... ({'element': IRelationship['subjects'], 'multiple': True},
... IRelationship['relationshiptype'],
... {'element': IRelationship['objects'], 'multiple': True},
... IContextAwareRelationship['getContext']),
... index.TransposingTransitiveQueriesFactory('subjects', 'objects'),
... family=BTrees.family64)

>>> list(ix2.apply({'values':
... {'resultName': 'objects', 'query':
... q({'subjects': people['Gary']})}}))
[]

>>> list(ix2.apply({'relationships':
... q({'subjects': people['Gary']})}))
[]

But, as shown in the main text for values, if you are using another
BTree module for relationships, you’ll get an error.

>>> ix2 = index.Index( # OIBTree relFamily
... ({'element': IRelationship['subjects'], 'multiple': True},
... IRelationship['relationshiptype'],
... {'element': IRelationship['objects'], 'multiple': True},
... IContextAwareRelationship['getContext']),
... index.TransposingTransitiveQueriesFactory('subjects', 'objects'),
... relFamily=BTrees.OIBTree)

>>> list(ix2.apply({'relationships':
... q({'subjects': people['Gary']})}))
Traceback (most recent call last):
...
ValueError: cannot fulfill `apply` interface because cannot return an (I|L)FBTree-based result

The last basic search methods, isLinked, findRelationshipTokenChains, and
findRelationshipChains, are most useful for transitive searches. We
have not yet created any relationships that we can use transitively. They
still will work with intransitive searches, so we will demonstrate them here
as an introduction, then discuss them more below when we introduce transitive
relationships.

findRelationshipChains and findRelationshipTokenChains let you find
transitive relationship paths. Right now a single relationship (a single
point) can’t create much of a line. So first, here’s a somewhat useless
example:

>>> [[intids.getObject(t) for t in path] for path in
... ix.findRelationshipTokenChains(
... q({'reltype': 'has the role of'}))]
... # doctest: +NORMALIZE_WHITESPACE
[[<(<Person 'Fred'>,) has the role of (<Role 'Project Manager'>,)>]]

That’s useless, because there’s no chance of it being a transitive search, and
so you might as well use findRelationshipTokenSet. This will become more
interesting later on.

Here’s the same example with findRelationshipChains, which resolves the
relationship tokens itself.

>>> list(ix.findRelationshipChains(q({'reltype': 'has the role of'})))
... # doctest: +NORMALIZE_WHITESPACE
[(<(<Person 'Fred'>,) has the role of (<Role 'Project Manager'>,)>,)]

isLinked returns a boolean: whether there is at least one path that matches
the search. In fact, the implementation is essentially

    try:
        next(iter(ix.findRelationshipTokenChains(...args...)))
    except StopIteration:
        return False
    else:
        return True

So, we can say

>>> ix.isLinked(q({'subjects': people['Fred']}))
True
>>> ix.isLinked(q({'subjects': people['Gary']}))
False
>>> ix.isLinked(q({'subjects': people['Fred'],
... 'reltype': 'manages'}))
FalseThis is reasonably useful as is, to test basic assertions. It also works with
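Conceptually, what the chain searches do for queries like these is a breadth-first enumeration of paths, and isLinked just asks whether that enumeration yields anything. The following stdlib-only sketch is a toy illustration of that idea, not the index's actual implementation; the plain ``edges`` mapping stands in for what the transitive queries factory derives from the indexed data:

```python
from collections import deque

def find_chains(edges, start):
    """Yield token chains (paths) breadth-first, starting from ``start``.

    ``edges`` maps a token to the tokens reachable from it in one
    relationship -- a toy stand-in for the transitive queries factory.
    """
    queue = deque([(start,)])
    while queue:
        chain = queue.popleft()
        for nxt in edges.get(chain[-1], ()):
            if nxt in chain:  # cycle guard
                continue
            longer = chain + (nxt,)
            yield longer
            queue.append(longer)

def is_linked(edges, start):
    # "Is there at least one chain?" -- the same shortcut described
    # above for isLinked.
    return next(find_chains(edges, start), None) is not None
```

With ``edges = {'Ygritte': ['Uther'], 'Uther': ['Emily']}``, find_chains yields ('Ygritte', 'Uther') and then ('Ygritte', 'Uther', 'Emily'), mirroring the walking-away-from-the-start ordering of the real searches.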
transitive searches, as we will see below.An even simpler example(This was added to test that searching for a simple relationship works
even when the transitive query factory is not set.)Let’s create a very simple relation type, using strings as the source
and target types:>>> class IStringRelation(interface.Interface):
... name = interface.Attribute("The name of the value.")
... value = interface.Attribute("The value associated with the name.")>>> @interface.implementer(IStringRelation)
... class StringRelation(persistent.Persistent, Contained):
...
... def __init__(self, name, value):
... self.name = name
... self.value = value>>> app[u"string-relation-1"] = StringRelation("name1", "value1")
>>> app[u"string-relation-2"] = StringRelation("name2", "value2")>>> transaction.commit()We can now create an index that uses these:>>> from BTrees import OOBTree>>> sx = index.Index(
... ({"element": IStringRelation["name"],
... "load": None, "dump": None, "btree": OOBTree},
... {"element": IStringRelation["value"],
... "load": None, "dump": None, "btree": OOBTree},
... ))>>> app["sx"] = sx
>>> transaction.commit()And we’ll add the relations to the index:>>> app["sx"].index(app["string-relation-1"])
>>> app["sx"].index(app["string-relation-2"])Getting a relationship back out should be very simple. Let’s look for
all the values associates with “name1”:>>> query = sx.tokenizeQuery({"name": "name1"})
>>> list(sx.findValues("value", query))
['value1']Searching for empty setsWe’ve examined the most basic search capabilities. One other feature of the
index and search is that one can search for relationships to an empty set, or,
for single-value relationships like ‘reltype’ and ‘context’ in our
examples, None.Let’s add a relationship with a ‘manages’ relationshiptype, and no context; and
a relationship with a ‘commissioned’ relationship type, and a company context.Notice that there are two ways of adding indexes, by the way. We have already
seen that the index has an ‘index’ method that takes a relationship. Here we
use ‘index_doc’ which is a method defined in zope.index.interfaces.IInjection
that requires the token to already be generated. Since we are using intids
to tokenize the relationships, we must add them to the ZODB app object to give
them the possibility of a connection.>>> app['abeAndBran'] = rel = Relationship(
... (people['Abe'],), 'manages', (people['Bran'],))
>>> ix.index_doc(intids.register(rel), rel)
>>> app['abeAndVault'] = rel = SpecialRelationship(
... (people['Abe'],), 'commissioned',
... (projects['Vault design and implementation'],))
>>> IContextAwareRelationship(rel).setContext(companies['Zookd'])
>>> ix.index_doc(intids.register(rel), rel)

Now we can search for Abe’s relationship that does not have a context. The
None value is always used to match both an empty set and a single None value.
The index does not support any other “empty” values at this time.

>>> sorted(
... repr(load(t, ix, {})) for t in ix.findValueTokens(
... 'objects',
... q({'subjects': people['Abe']})))
["<Person 'Bran'>", "<Project 'Vault design and implementation'>"]
>>> [load(t, ix, {}) for t in ix.findValueTokens(
... 'objects', q({'subjects': people['Abe'], 'context': None}))]
[<Person 'Bran'>]
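(As an aside: the index has no built-in spelling for "one of several values", but the effect can be composed client-side by running one single-value query per candidate and unioning the resulting token sets. A stdlib-only sketch, where a plain dict and Python sets stand in for findRelationshipTokenSet and its BTree sets:)

```python
def find_any(find_token_set, name, tokens):
    """Union the intransitive results of one single-value query per token."""
    result = set()
    for token in tokens:
        result.update(find_token_set({name: token}))
    return result

# Toy stand-in for ix.findRelationshipTokenSet: context token -> rel tokens.
data = {1: {10, 11}, 2: {11, 12}}
fake_find = lambda query: data.get(query['context'], set())
```

find_any(fake_find, 'context', [1, 2]) collects the union of both result sets; with real BTree sets you would use the BTrees set-union operations instead of Python sets.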
>>> sorted(
... repr(v) for v in ix.findValues(
... 'objects',
... q({'subjects': people['Abe']})))
["<Person 'Bran'>", "<Project 'Vault design and implementation'>"]
>>> list(ix.findValues(
... 'objects', q({'subjects': people['Abe'], 'context': None})))
[<Person 'Bran'>]

Note that the index does not currently support searching for relationships that
have any value, or one of a set of values. This may be added at a later date;
the spelling for such queries is among the more troublesome parts.

Working with transitive searches
--------------------------------

It’s possible to do transitive searches as well. This can let you find all
transitive bosses, or transitive subordinates, in our ‘manages’ relationship
type. Let’s set up some example relationships. Using letters to represent our
people, we’ll create three hierarchies like this:A JK R
/ \ / \
B C LM NOP S T U
/ \ | | /| | \
D E F Q V W X |
| | \--Y
H G |
| Z
IThis means that, for instance, person “A” (“Abe”) manages “B” (“Bran”) and “C”
(“Cathy”).

We already have a relationship from Abe to Bran, so we’ll only be adding the
rest.

>>> relmap = (
... ('A', 'C'), ('B', 'D'), ('B', 'E'), ('C', 'F'),
... ('F', 'G'), ('D', 'H'), ('H', 'I'), ('JK', 'LM'), ('JK', 'NOP'),
... ('LM', 'Q'), ('R', 'STU'), ('S', 'VW'), ('T', 'X'), ('UX', 'Y'),
... ('Y', 'Z'))
>>> letters = dict((name[0], ob) for name, ob in people.items())
>>> for subs, obs in relmap:
... subs = tuple(letters[l] for l in subs)
... obs = tuple(letters[l] for l in obs)
... app['%sManages%s' % (''.join(o.name for o in subs),
... ''.join(o.name for o in obs))] = rel = (
... Relationship(subs, 'manages', obs))
... ix.index(rel)
...

Now we can do both transitive and intransitive searches. Here are a few
examples.

>>> [load(t, ix, {}) for t in ix.findValueTokens(
... 'subjects',
... q({'objects': people['Ingrid'],
... 'reltype': 'manages'}))
... ]
[<Person 'Heather'>, <Person 'David'>, <Person 'Bran'>, <Person 'Abe'>]

Here’s the same thing using findValues.

>>> list(ix.findValues(
... 'subjects',
... q({'objects': people['Ingrid'],
... 'reltype': 'manages'})))
[<Person 'Heather'>, <Person 'David'>, <Person 'Bran'>, <Person 'Abe'>]

Notice that they are in order, walking away from the search start. It also
is breadth-first -- for instance, look at the list of superiors to Zane: Xen
and Uther come before Rob and Terry.

>>> res = list(ix.findValues(
... 'subjects',
... q({'objects': people['Zane'], 'reltype': 'manages'})))
>>> res[0]
<Person 'Ygritte'>
>>> sorted(repr(p) for p in res[1:3])
["<Person 'Uther'>", "<Person 'Xen'>"]
>>> sorted(repr(p) for p in res[3:])
["<Person 'Rob'>", "<Person 'Terry'>"]Notice that all the elements of the search are maintained as it is walked–only
the transposed values are changed, and the rest remain statically. For
instance, notice the difference between these two results.>>> [load(t, ix, {}) for t in ix.findValueTokens(
... 'objects',
... q({'subjects': people['Cathy'], 'reltype': 'manages'}))]
[<Person 'Fred'>, <Person 'Gary'>]
>>> res = [load(t, ix, {}) for t in ix.findValueTokens(
... 'objects',
... q({'subjects': people['Cathy']}))]
>>> res[0]
<Person 'Fred'>
>>> sorted(repr(i) for i in res[1:])
["<Person 'Gary'>", "<Role 'Project Manager'>"]The first search got what we expected for our management relationshiptype–
walking from Cathy, the relationshiptype was maintained, and we only got the
Gary subordinate. The second search didn’t specify the relationshiptype, so
the transitive search included the Role we added first (Fred has the role of
Project Manager for the zope.org redesign).ThemaxDepthargument allows control over how far to search. For instance,
if we only want to search for Bran’s subordinates a maximum of two steps deep,
we can do so:>>> res = [load(t, ix, {}) for t in ix.findValueTokens(
... 'objects',
... q({'subjects': people['Bran']}),
... maxDepth=2)]
>>> sorted(repr(i) for i in res)
["<Person 'David'>", "<Person 'Emily'>", "<Person 'Heather'>"]The same is true for findValues.>>> res = list(ix.findValues(
... 'objects',
... q({'subjects': people['Bran']}), maxDepth=2))
>>> sorted(repr(i) for i in res)
["<Person 'David'>", "<Person 'Emily'>", "<Person 'Heather'>"]A minimum depth–a number of relationships that must be traversed before
results are desired–can also be achieved trivially using the targetFilter
argument described soon below. For now, we will continue in the order of the
arguments list, sofilteris up next.Thefilterargument takes an object (such as a function) that provides
interfaces.IFilter. As the interface lists, it receives the current chain
of relationship tokens (“relchain”), the original query that started the search
(“query”), the index object (“index”), and a dictionary that will be used
throughout the search and then discarded that can be used for optimizations
(“cache”). It should return a boolean, which determines whether the given
relchain should be used at all–traversed or returned. For instance, if
security dictates that the current user can only see certain relationships,
the filter could be used to make only the available relationships traversable.
Other uses are only getting relationships that were created after a given time,
or that have some annotation (available after resolving the token).

Let’s look at an example of a filter that only allows relationships in a given
set, the way a security-based filter might work. We’ll then use it to model
a situation in which the current user can’t see that Ygritte is managed by
Uther, in addition to Xen.

>>> s = set(intids.getId(r) for r in app.values()
... if IRelationship.providedBy(r))
>>> relset = list(
... ix.findRelationshipTokenSet(q({'subjects': people['Xen']})))
>>> len(relset)
1
>>> s.remove(relset[0])
>>> dump(people['Uther'], ix, {}) in list(
... ix.findValueTokens('subjects', q({'objects': people['Ygritte']})))
True
>>> dump(people['Uther'], ix, {}) in list(ix.findValueTokens(
... 'subjects', q({'objects': people['Ygritte']}),
... filter=lambda relchain, query, index, cache: relchain[-1] in s))
False
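Filters of this IFilter shape compose easily. As another sketch -- hypothetical, assuming relationships expose a ``created`` timestamp, which the relationships in this document do not -- a time-based filter like the one mentioned above could look like:

```python
import datetime

def make_created_after(cutoff, resolve):
    # resolve turns a relationship token (relchain[-1]) back into the
    # relationship object; only sufficiently new relationships pass.
    def created_after(relchain, query, index, cache):
        return resolve(relchain[-1]).created > cutoff
    return created_after

# Tiny stand-in objects for demonstration.
class FakeRel:
    def __init__(self, created):
        self.created = created

fake = {1: FakeRel(datetime.datetime(2024, 1, 1)),
        2: FakeRel(datetime.datetime(2020, 1, 1))}
flt = make_created_after(datetime.datetime(2022, 1, 1), fake.__getitem__)
```

Passed as ``filter=flt``, such a callable would prune old relationships both from the results and from further traversal.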
>>> people['Uther'] in list(
... ix.findValues('subjects', q({'objects': people['Ygritte']})))
True
>>> people['Uther'] in list(ix.findValues(
... 'subjects', q({'objects': people['Ygritte']}),
... filter=lambda relchain, query, index, cache: relchain[-1] in s))
False

The next two search arguments are the targetQuery and the targetFilter. They
both are filters on the output of the search methods, while not affecting the
traversal/search process. The targetQuery takes a query identical to the main
query, and the targetFilter takes an IFilter identical to the one used by the
``filter`` argument. The targetFilter can do all of the work of the targetQuery,
but the targetQuery makes a common case -- wanting to find the paths between
two objects, or whether two objects are linked at all, for instance --
convenient.

We’ll skip over targetQuery for a moment (we’ll return when we revisit
``findRelationshipChains`` and ``isLinked``), and look at targetFilter.
targetFilter can be used for many tasks, such as only returning values that
are in specially annotated relationships, or only returning values that have
traversed a certain hinge relationship in a two-part search, or other tasks.
A very simple one, though, is to effectively specify a minimum traversal depth.
Here, we find the people who are precisely two steps down from Bran, no more
and no less. We do it twice, once with findValueTokens and once with
findValues.

>>> [load(t, ix, {}) for t in ix.findValueTokens(
... 'objects', q({'subjects': people['Bran']}), maxDepth=2,
... targetFilter=lambda relchain, q, i, c: len(relchain)>=2)]
[<Person 'Heather'>]
>>> list(ix.findValues(
... 'objects', q({'subjects': people['Bran']}), maxDepth=2,
... targetFilter=lambda relchain, q, i, c: len(relchain)>=2))
[<Person 'Heather'>]

Heather is the only person precisely two steps down from Bran.

Notice that we specified both maxDepth and targetFilter. We could have
received the same output by specifying a targetFilter of ``len(relchain)==2``
and no maxDepth, but there is an important difference in efficiency. maxDepth
and filter can reduce the amount of work done by the index because they can
stop searching after reaching the maxDepth, or failing the filter; the
targetFilter and targetQuery arguments simply hide the results obtained, which
can reduce a bit of work in the case of getValues but generally don’t reduce
any of the traversal work.

The last argument to the search methods is ``transitiveQueriesFactory``. It is
a powertool that replaces the index’s default traversal factory for the
duration of the search. This allows custom traversal for individual searches,
and can support a number of advanced use cases. For instance, our index
assumes that you want to traverse objects and sources, and that the context
should be constant; that may not always be the desired traversal behavior. If
we had a relationship of PERSON1 TAUGHT PERSON2 (the lessons of PERSON3) then
to find the teachers of any given person you might want to traverse PERSON1,
but sometimes you might want to traverse PERSON3 as well. You can change the
behavior by providing a different factory.

To show this example we will need to add a few more relationships. We will say
that Mary teaches Rob the lessons of Abe; Olaf teaches Zane the lessons of
Bran; Cathy teaches Bran the lessons of Lee; David teaches Abe the lessons of
Zane; and Emily teaches Mary the lessons of Ygritte.

In the diagram, left-hand lines indicate “taught” and right-hand lines indicate
“the lessons of”, so

    E   Y
     \ /
      M

should be read as “Emily taught Mary the lessons of Ygritte”. Here’s the full
diagram:

            C   L
             \ /
          O   B
           \ /
    E   Y   D   Z
     \ /     \ /
      M       A
       \     /
        \   /
          R

You can see then that the transitive path of Rob’s teachers is Mary and Emily,
but the transitive path of Rob’s lessons is Abe, Zane, Bran, and Lee.

Transitive queries factories must do extra work when the transitive walk is
across token types. We have used the TransposingTransitiveQueriesFactory to
build our transposers before, but now we need to write a custom one that
translates the tokens (ooh! a
TokenTranslatingTransposingTransitiveQueriesFactory! …maybe we won’t go that
far…).

We will add the relationships, build the custom transitive factory, and then
again do the search work twice, once with findValueTokens and once with
findValues.

>>> for triple in ('EMY', 'MRA', 'DAZ', 'OZB', 'CBL'):
... teacher, student, source = (letters[l] for l in triple)
... rel = SpecialRelationship((teacher,), 'taught', (student,))
... app['%sTaught%sTo%s' % (
... teacher.name, source.name, student.name)] = rel
... IContextAwareRelationship(rel).setContext(source)
... ix.index_doc(intids.register(rel), rel)
...

>>> def transitiveFactory(relchain, query, index, cache):
... dynamic = cache.get('dynamic')
... if dynamic is None:
... intids = cache['intids'] = component.getUtility(
... IIntIds, context=index)
... static = cache['static'] = {}
... dynamic = cache['dynamic'] = []
... names = ['objects', 'context']
... for nm, val in query.items():
... try:
... ix = names.index(nm)
... except ValueError:
... static[nm] = val
... else:
... if dynamic:
... # both were specified: no transitive search known.
... del dynamic[:]
... cache['intids'] = False
... break
... else:
... dynamic.append(nm)
... dynamic.append(names[not ix])
... else:
... intids = component.getUtility(IIntIds, context=index)
... if dynamic[0] == 'objects':
... def translate(t):
... return dump(intids.getObject(t), index, cache)
... else:
... def translate(t):
... return intids.register(load(t, index, cache))
... cache['translate'] = translate
... else:
... static = cache['static']
... translate = cache['translate']
... if dynamic:
... for r in index.findValueTokenSet(relchain[-1], dynamic[1]):
... res = {dynamic[0]: translate(r)}
... res.update(static)
... yield res

>>> [load(t, ix, {}) for t in ix.findValueTokens(
... 'subjects',
... q({'objects': people['Rob'], 'reltype': 'taught'}))]
[<Person 'Mary'>, <Person 'Emily'>]
>>> [intids.getObject(t) for t in ix.findValueTokens(
... 'context',
... q({'objects': people['Rob'], 'reltype': 'taught'}),
... transitiveQueriesFactory=transitiveFactory)]
[<Person 'Abe'>, <Person 'Zane'>, <Person 'Bran'>, <Person 'Lee'>]

>>> list(ix.findValues(
... 'subjects',
... q({'objects': people['Rob'], 'reltype': 'taught'})))
[<Person 'Mary'>, <Person 'Emily'>]
>>> list(ix.findValues(
... 'context',
... q({'objects': people['Rob'], 'reltype': 'taught'}),
... transitiveQueriesFactory=transitiveFactory))
[<Person 'Abe'>, <Person 'Zane'>, <Person 'Bran'>, <Person 'Lee'>]

Transitive queries factories can be very powerful, and we aren’t finished
talking about them in this document: see “Transitively mapping multiple
elements” below.

We have now discussed, or at least mentioned, all of the available search
arguments. The ``apply`` method’s ‘values’ search has the same arguments and
features as ``findValues``, so it can also do these transitive tricks. Let’s
get all of Karyn’s subordinates.

>>> res = ix.apply({'values':
... {'resultName': 'objects', 'query':
... q({'reltype': 'manages',
... 'subjects': people['Karyn']})}})
>>> res # doctest: +ELLIPSIS
IFSet([...])
>>> sorted(repr(load(t, ix, {})) for t in res)
... # doctest: +NORMALIZE_WHITESPACE
["<Person 'Lee'>", "<Person 'Mary'>", "<Person 'Nancy'>",
"<Person 'Olaf'>", "<Person 'Perry'>", "<Person 'Quince'>"]As we return tofindRelationshipChainsandfindRelationshipTokenChains, we
also return to the search argument we postponed above: targetQuery.ThefindRelationshipChainsandfindRelationshipTokenChainscan simply find
all paths:>>> res = [repr([intids.getObject(t) for t in path]) for path in
... ix.findRelationshipTokenChains(
... q({'reltype': 'manages', 'subjects': people['Jim']}
... ))]
>>> len(res)
3
>>> sorted(res[:2]) # doctest: +NORMALIZE_WHITESPACE
["[<(<Person 'Jim'>, <Person 'Karyn'>) manages
(<Person 'Lee'>, <Person 'Mary'>)>]",
"[<(<Person 'Jim'>, <Person 'Karyn'>) manages
(<Person 'Nancy'>, <Person 'Olaf'>, <Person 'Perry'>)>]"]
>>> res[2] # doctest: +NORMALIZE_WHITESPACE
"[<(<Person 'Jim'>, <Person 'Karyn'>) manages
(<Person 'Lee'>, <Person 'Mary'>)>,
<(<Person 'Lee'>, <Person 'Mary'>) manages
(<Person 'Quince'>,)>]"
>>> res == [repr(list(p)) for p in
... ix.findRelationshipChains(
... q({'reltype': 'manages', 'subjects': people['Jim']}
... ))]
True

Like ``findValues``, this is a breadth-first search.

If we use a targetQuery with ``findRelationshipChains``, you can find all paths
between two searches. For instance, consider the paths between Rob and
Ygritte. While a ``findValues`` search would only include Rob once if asked to
search for supervisors, there are two paths. These can be found with the
targetQuery.

>>> res = [repr([intids.getObject(t) for t in path]) for path in
... ix.findRelationshipTokenChains(
... q({'reltype': 'manages', 'subjects': people['Rob']}),
... targetQuery=q({'objects': people['Ygritte']}))]
>>> len(res)
2
>>> sorted(res[:2]) # doctest: +NORMALIZE_WHITESPACE
["[<(<Person 'Rob'>,) manages
(<Person 'Sam'>, <Person 'Terry'>, <Person 'Uther'>)>,
<(<Person 'Terry'>,) manages (<Person 'Xen'>,)>,
<(<Person 'Uther'>, <Person 'Xen'>) manages (<Person 'Ygritte'>,)>]",
"[<(<Person 'Rob'>,) manages
(<Person 'Sam'>, <Person 'Terry'>, <Person 'Uther'>)>,
<(<Person 'Uther'>, <Person 'Xen'>) manages (<Person 'Ygritte'>,)>]"]

Here’s a query with no results:

>>> len(list(ix.findRelationshipTokenChains(
... q({'reltype': 'manages', 'subjects': people['Rob']}),
... targetQuery=q({'objects': companies['Zookd']}))))
0

You can combine targetQuery with targetFilter. Here we arbitrarily say we
are looking for a path between Rob and Ygritte that is at least 3 links long.

>>> res = [repr([intids.getObject(t) for t in path]) for path in
... ix.findRelationshipTokenChains(
... q({'reltype': 'manages', 'subjects': people['Rob']}),
... targetQuery=q({'objects': people['Ygritte']}),
... targetFilter=lambda relchain, q, i, c: len(relchain)>=3)]
>>> len(res)
1
>>> res # doctest: +NORMALIZE_WHITESPACE
["[<(<Person 'Rob'>,) manages
(<Person 'Sam'>, <Person 'Terry'>, <Person 'Uther'>)>,
<(<Person 'Terry'>,) manages (<Person 'Xen'>,)>,
<(<Person 'Uther'>, <Person 'Xen'>) manages (<Person 'Ygritte'>,)>]"]

``isLinked`` takes the same arguments as all of the other transitive-aware
methods. For instance, Rob and Ygritte are transitively linked, but Abe and
Zane are not.

>>> ix.isLinked(
... q({'reltype': 'manages', 'subjects': people['Rob']}),
... targetQuery=q({'objects': people['Ygritte']}))
True
>>> ix.isLinked(
... q({'reltype': 'manages', 'subjects': people['Abe']}),
... targetQuery=q({'objects': people['Ygritte']}))
False

Detecting cycles
----------------

Suppose we’re modeling a ‘king in disguise’: someone high up in management also
works as a peon to see how his employees’ lives are. We could model this a
number of ways that might make more sense than what we’ll do now, but to show
cycles at work we’ll just add an additional relationship so that Abe works for
Gary. That means that the very longest path from Ingrid up gets a lot longer–
in theory, it’s infinitely long, because of the cycle.

The index keeps track of this and stops right when the cycle happens, and right
before the cycle duplicates any relationships. It marks the chain that has a
cycle as a special kind of tuple that implements ICircularRelationshipPath.
The tuple has a ‘cycled’ attribute that contains the one or more searches
that would be equivalent to following the cycle (given the same transitiveMap).

Let’s actually look at the example we described.

>>> res = list(ix.findRelationshipTokenChains(
... q({'objects': people['Ingrid'], 'reltype': 'manages'})))
>>> len(res)
4
>>> len(res[3])
4
>>> interfaces.ICircularRelationshipPath.providedBy(res[3])
False
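Before we create the cycle, it is worth noting that the check itself amounts to refusing to extend a chain with a relationship token that is already on it. A simplified model of that rule (a hypothetical helper, not the index’s code):

```python
def extend(chain, next_tokens):
    # Extend a chain of relationship tokens breadth-first; a token that
    # already appears on the chain would repeat a relationship, so the
    # chain is reported as a cycle instead of being extended.
    out = []
    for tok in next_tokens:
        if tok in chain:
            out.append(('cycle', chain))
        else:
            out.append(('path', chain + (tok,)))
    return out

steps = extend((10, 11), [11, 12])
```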
>>> rel = Relationship(
... (people['Gary'],), 'manages', (people['Abe'],))
>>> app['GaryManagesAbe'] = rel
>>> ix.index(rel)
>>> res = list(ix.findRelationshipTokenChains(
... q({'objects': people['Ingrid'], 'reltype': 'manages'})))
>>> len(res)
8
>>> len(res[7])
8
>>> interfaces.ICircularRelationshipPath.providedBy(res[7])
True
>>> [sorted(ix.resolveQuery(search).items()) for search in res[7].cycled]
[[('objects', <Person 'Abe'>), ('reltype', 'manages')]]
>>> tuple(ix.resolveRelationshipTokens(res[7]))
... # doctest: +NORMALIZE_WHITESPACE
(<(<Person 'Heather'>,) manages (<Person 'Ingrid'>,)>,
<(<Person 'David'>,) manages (<Person 'Heather'>,)>,
<(<Person 'Bran'>,) manages (<Person 'David'>,)>,
<(<Person 'Abe'>,) manages (<Person 'Bran'>,)>,
<(<Person 'Gary'>,) manages (<Person 'Abe'>,)>,
<(<Person 'Fred'>,) manages (<Person 'Gary'>,)>,
<(<Person 'Cathy'>,) manages (<Person 'Fred'>,)>,
<(<Person 'Abe'>,) manages (<Person 'Cathy'>,)>)

The same kind of thing works for ``findRelationshipChains``. Notice that the
query in the .cycled attribute is not resolved: it is still the query that
would be needed to continue the cycle.

>>> res = list(ix.findRelationshipChains(
... q({'objects': people['Ingrid'], 'reltype': 'manages'})))
>>> len(res)
8
>>> len(res[7])
8
>>> interfaces.ICircularRelationshipPath.providedBy(res[7])
True
>>> [sorted(ix.resolveQuery(search).items()) for search in res[7].cycled]
[[('objects', <Person 'Abe'>), ('reltype', 'manages')]]
>>> res[7] # doctest: +NORMALIZE_WHITESPACE
cycle(<(<Person 'Heather'>,) manages (<Person 'Ingrid'>,)>,
<(<Person 'David'>,) manages (<Person 'Heather'>,)>,
<(<Person 'Bran'>,) manages (<Person 'David'>,)>,
<(<Person 'Abe'>,) manages (<Person 'Bran'>,)>,
<(<Person 'Gary'>,) manages (<Person 'Abe'>,)>,
<(<Person 'Fred'>,) manages (<Person 'Gary'>,)>,
<(<Person 'Cathy'>,) manages (<Person 'Fred'>,)>,
<(<Person 'Abe'>,) manages (<Person 'Cathy'>,)>)

Notice that there is nothing special about the new relationship, by the way.
If we had started to look for Fred’s supervisors, the cycle marker would have
been given for the relationship that points back to Fred as a supervisor to
himself. There’s no way for the computer to know which is the “cause” without
further help and policy.

Handling cycles can be tricky. Now imagine that we have a cycle that involves
a relationship with two objects, only one of which causes the cycle. The other
object should continue to be followed.

For instance, let’s have Q manage L and Y. The link to L will be a cycle, but
the link to Y is not, and should be followed. This means that only the middle
relationship chain will be marked as a cycle.

>>> rel = Relationship((people['Quince'],), 'manages',
... (people['Lee'], people['Ygritte']))
>>> app['QuinceManagesLeeYgritte'] = rel
>>> ix.index_doc(intids.register(rel), rel)
>>> res = [p for p in ix.findRelationshipTokenChains(
... q({'reltype': 'manages', 'subjects': people['Mary']}))]
>>> [interfaces.ICircularRelationshipPath.providedBy(p) for p in res]
[False, True, False]
>>> [[intids.getObject(t) for t in p] for p in res]
... # doctest: +NORMALIZE_WHITESPACE
[[<(<Person 'Lee'>, <Person 'Mary'>) manages (<Person 'Quince'>,)>],
[<(<Person 'Lee'>, <Person 'Mary'>) manages (<Person 'Quince'>,)>,
<(<Person 'Quince'>,) manages (<Person 'Lee'>, <Person 'Ygritte'>)>],
[<(<Person 'Lee'>, <Person 'Mary'>) manages (<Person 'Quince'>,)>,
<(<Person 'Quince'>,) manages (<Person 'Lee'>, <Person 'Ygritte'>)>,
<(<Person 'Ygritte'>,) manages (<Person 'Zane'>,)>]]
>>> [sorted(
... (nm, nm == 'reltype' and t or load(t, ix, {}))
... for nm, t in search.items()) for search in res[1].cycled]
[[('reltype', 'manages'), ('subjects', <Person 'Lee'>)]]

Transitively mapping multiple elements
--------------------------------------

Transitive searches can do whatever searches the transitiveQueriesFactory
returns, which means that complex transitive behavior can be modeled. For
instance, imagine genealogical relationships. Let’s say the basic
relationship is “MALE and FEMALE had CHILDREN”. Walking transitively to get
ancestors or descendants would need to distinguish between male children and
female children in order to correctly generate the transitive search. This
could be accomplished by resolving each child token and examining the object
or, probably more efficiently, getting an indexed collection of males and
females (and caching it in the cache dictionary for further transitive steps)
and checking the gender by membership in the indexed collections. Either of
these approaches could be performed by a transitiveQueriesFactory. A full
example is left as an exercise to the reader.

Lies, damn lies, and statistics
-------------------------------

The zope.index.interfaces.IStatistics methods are implemented to provide
minimal introspectability. wordCount always returns 0, because words are
irrelevant to this kind of index. documentCount returns the number of
relationships indexed.

>>> ix.wordCount()
0
>>> ix.documentCount()
25

Reindexing and removing relationships
-------------------------------------

Using an index over an application’s lifecycle usually requires changes to the
indexed objects. As per the zope.index interfaces, ``index_doc`` can reindex
relationships, ``unindex_doc`` can remove them, and ``clear`` can clear the
entire index.

Here we change the zope.org project manager from Fred to Emily.

>>> [load(t, ix, {}) for t in ix.findValueTokens(
... 'subjects',
... q({'reltype': 'has the role of',
... 'objects': roles['Project Manager'],
... 'context': projects['zope.org redesign']}))]
[<Person 'Fred'>]
>>> rel = intids.getObject(list(ix.findRelationshipTokenSet(
... q({'reltype': 'has the role of',
... 'objects': roles['Project Manager'],
... 'context': projects['zope.org redesign']})))[0])
>>> rel.subjects = (people['Emily'],)
>>> ix.index_doc(intids.register(rel), rel)
>>> q = ix.tokenizeQuery
>>> [load(t, ix, {}) for t in ix.findValueTokens(
... 'subjects',
... q({'reltype': 'has the role of',
... 'objects': roles['Project Manager'],
... 'context': projects['zope.org redesign']}))]
[<Person 'Emily'>]

Here we remove the relationship that made a cycle for Abe in the ‘king in
disguise’ scenario.

>>> res = list(ix.findRelationshipTokenChains(
... q({'objects': people['Ingrid'],
... 'reltype': 'manages'})))
>>> len(res)
8
>>> len(res[7])
8
>>> interfaces.ICircularRelationshipPath.providedBy(res[7])
True
>>> rel = intids.getObject(list(ix.findRelationshipTokenSet(
... q({'subjects': people['Gary'], 'reltype': 'manages',
... 'objects': people['Abe']})))[0])
>>> ix.unindex(rel) # == ix.unindex_doc(intids.getId(rel))
>>> ix.documentCount()
24
>>> res = list(ix.findRelationshipTokenChains(
... q({'objects': people['Ingrid'], 'reltype': 'manages'})))
>>> len(res)
4
>>> len(res[3])
4
>>> interfaces.ICircularRelationshipPath.providedBy(res[3])
False

Finally we clear out the whole index.

>>> ix.clear()
>>> ix.documentCount()
0
>>> list(ix.findRelationshipTokenChains(
... q({'objects': people['Ingrid'], 'reltype': 'manages'})))
[]
>>> [load(t, ix, {}) for t in ix.findValueTokens(
... 'subjects',
... q({'reltype': 'has the role of',
... 'objects': roles['Project Manager'],
... 'context': projects['zope.org redesign']}))]
[]

Optimizing relationship index use
---------------------------------

There are three optimization opportunities built into the index.

- use the cache to load and dump tokens;
- don’t load or dump tokens (the values themselves may be used as tokens); and
- have the returned value be of the same btree family as the result family.

For some operations, particularly with hundreds or thousands of members in a
single relationship value, some of these optimizations can speed up some
common-case reindexing work by around 100 times.

The easiest (and perhaps least useful) optimization is that all dump
calls and all load calls generated by a single operation share a cache
dictionary per call type (dump/load), per indexed relationship value.
Therefore, for instance, we could stash an intids utility, so that we
only had to do a utility lookup once, and thereafter it was only a
single dictionary lookup. This is what the default ``generateToken`` and
``resolveToken`` functions in index.py do: look at them for an example.

A further optimization is to not load or dump tokens at all, but use values
that may be tokens. This will be particularly useful if the tokens have
__cmp__ (or equivalent) in C, such as built-in types like ints. To specify
this behavior, you create an index with the ‘load’ and ‘dump’ values for the
indexed attribute descriptions explicitly set to None.

>>> ix = index.Index(
... ({'element': IRelationship['subjects'], 'multiple': True,
... 'dump': None, 'load': None},
... {'element': IRelationship['relationshiptype'],
... 'dump': relTypeDump, 'load': relTypeLoad, 'btree': OIBTree,
... 'name': 'reltype'},
... {'element': IRelationship['objects'], 'multiple': True,
... 'dump': None, 'load': None},
... {'element': IContextAwareRelationship['getContext'],
... 'name': 'context'}),
... index.TransposingTransitiveQueriesFactory('subjects', 'objects'))
...
>>> sm['rel_index_2'] = ix
>>> app['ex_rel_1'] = rel = Relationship((1,), 'has the role of', (2,))
>>> ix.index(rel)
>>> list(ix.findValueTokens('objects', {'subjects': 1}))
[2]

Finally, if you have single relationships that relate hundreds or thousands
of objects, it can be a huge win if the value is a ‘multiple’ of the same type
as the stored BTree for the given attribute. The default BTree family for
attributes is IFBTree; IOBTree is also a good choice, and may be preferable
for some applications.

>>> ix = index.Index(
... ({'element': IRelationship['subjects'], 'multiple': True,
... 'dump': None, 'load': None},
... {'element': IRelationship['relationshiptype'],
... 'dump': relTypeDump, 'load': relTypeLoad, 'btree': OIBTree,
... 'name': 'reltype'},
... {'element': IRelationship['objects'], 'multiple': True,
... 'dump': None, 'load': None},
... {'element': IContextAwareRelationship['getContext'],
... 'name': 'context'}),
... index.TransposingTransitiveQueriesFactory('subjects', 'objects'))
...
>>> sm['rel_index_3'] = ix
>>> from BTrees import IFBTree
>>> app['ex_rel_2'] = rel = Relationship(
... IFBTree.IFTreeSet((1,)), 'has the role of', IFBTree.IFTreeSet())
>>> ix.index(rel)
>>> list(ix.findValueTokens('objects', {'subjects': 1}))
[]
>>> list(ix.findValueTokens('subjects', {'objects': None}))
[1]

Reindexing is where some of the big improvements can happen. The following
gyrations exercise the optimization code.

>>> rel.objects.insert(2)
1
>>> ix.index(rel)
>>> list(ix.findValueTokens('objects', {'subjects': 1}))
[2]
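What makes these reindexes cheap is that, when the old and new values live in the same BTree family, the index can diff the two token sets and touch only the changed tokens. A simplified model of that delta computation (a hypothetical helper, not the index’s code):

```python
def reindex_delta(old_tokens, new_tokens):
    # Only tokens that appear or disappear need their forward and
    # reverse index entries updated; unchanged tokens are left alone.
    old, new = set(old_tokens), set(new_tokens)
    return new - old, old - new  # (tokens to add, tokens to remove)

added, removed = reindex_delta((3, 4, 5), (3, 4, 5, 6))
```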
>>> rel.subjects = IFBTree.IFTreeSet((3,4,5))
>>> ix.index(rel)
>>> list(ix.findValueTokens('objects', {'subjects': 3}))
[2]

>>> rel.subjects.insert(6)
1
>>> ix.index(rel)
>>> list(ix.findValueTokens('objects', {'subjects': 6}))
[2]

>>> rel.subjects.update(range(100, 200))
100
>>> ix.index(rel)
>>> list(ix.findValueTokens('objects', {'subjects': 100}))
[2]

>>> rel.subjects = IFBTree.IFTreeSet((3,4,5,6))
>>> ix.index(rel)
>>> list(ix.findValueTokens('objects', {'subjects': 3}))
[2]

>>> rel.subjects = IFBTree.IFTreeSet(())
>>> ix.index(rel)
>>> list(ix.findValueTokens('objects', {'subjects': 3}))
[]

>>> rel.subjects = IFBTree.IFTreeSet((3,4,5))
>>> ix.index(rel)
>>> list(ix.findValueTokens('objects', {'subjects': 3}))
[2]

tokenizeValues and resolveValueTokens work correctly without loaders and
dumpers -- that is, they do nothing.

>>> ix.tokenizeValues((3,4,5), 'subjects')
(3, 4, 5)
>>> ix.resolveValueTokens((3,4,5), 'subjects')
(3, 4, 5)

__contains__ and Unindexing
---------------------------

You can test whether a relationship is in an index with __contains__. Note
that this uses the actual relationship, not the relationship token.

>>> ix = index.Index(
... ({'element': IRelationship['subjects'], 'multiple': True,
... 'dump': dump, 'load': load},
... {'element': IRelationship['relationshiptype'],
... 'dump': relTypeDump, 'load': relTypeLoad, 'btree': OIBTree,
... 'name': 'reltype'},
... {'element': IRelationship['objects'], 'multiple': True,
... 'dump': dump, 'load': load},
... {'element': IContextAwareRelationship['getContext'],
... 'name': 'context'}),
... index.TransposingTransitiveQueriesFactory('subjects', 'objects'))
>>> ix.documentCount()
0
>>> app['fredisprojectmanager'].subjects = (people['Fred'],)
>>> ix.index(app['fredisprojectmanager'])
>>> ix.index(app['another_rel'])
>>> ix.documentCount()
2
>>> app['fredisprojectmanager'] in ix
True
>>> list(ix.findValues(
... 'subjects',
... q({'reltype': 'has the role of',
... 'objects': roles['Project Manager'],
... 'context': projects['zope.org redesign']})))
[<Person 'Fred'>]

>>> app['another_rel'] in ix
True

>>> app['abeAndBran'] in ix
False

As noted, you can unindex using unindex(relationship) or
unindex_doc(relationship token).

>>> ix.unindex_doc(ix.tokenizeRelationship(app['fredisprojectmanager']))
>>> app['fredisprojectmanager'] in ix
False
>>> list(ix.findValues(
... 'subjects',
... q({'reltype': 'has the role of',
... 'objects': roles['Project Manager'],
... 'context': projects['zope.org redesign']})))
[]

>>> ix.unindex(app['another_rel'])
>>> app['another_rel'] in ix
False

As defined by zope.index.interfaces.IInjection, if the relationship is
not in the index then calling unindex_doc is a no-op; the same holds
true for unindex. [11]

>>> ix.unindex(app['abeAndBran'])
>>> ix.unindex_doc(ix.tokenizeRelationship(app['abeAndBran']))

[11] ``apply`` and the other zope.index-related methods are the obvious
exceptions.

RelationshipContainer
---------------------

The relationship container holds IRelationship objects. It includes an API to
search for relationships and the objects to which they link, transitively and
intransitively. The relationships are objects in and of themselves, and they
can themselves be related as sources or targets in other relationships.

There are currently two implementations of the interface in this package. One
uses intids, and the other uses key references. They have different
advantages and disadvantages.

The intid implementation makes it possible to get intid values directly. This
can make it easier to merge the results with catalog searches and other intid-based
indexes. Possibly more importantly, it does not create ghosted objects for
the relationships as they are searched unless absolutely necessary (for
instance, using a relationship filter), but uses the intids alone for
searches. This can be very important if you are searching large databases of
relationships: the relationship objects and the associated keyref links in the
other implementation can flush the entire ZODB object cache, possibly leading
to unpleasant performance characteristics for your entire application.

On the other hand, there are a limited number of intids available: sys.maxint,
or 2147483647 on a 32 bit machine. As the intid usage increases, the
efficiency of finding unique intids decreases. This can be addressed by
increasing IOBTree's maximum integer to be 64 bit (9223372036854775807) or by
using the keyref implementation. The keyref implementation also eliminates a
dependency–the intid utility itself–if that is desired. This can be
important if you can’t rely on having an intid utility, or if objects to be
related span intid utilities. Finally, it’s possible that the direct
attribute access that underlies the keyref implementation might be quicker
than the intid dereferencing, but this is unproven and may be false.

For our examples, we’ll assume we’ve already imported a container and a
relationship from one of the available sources. You can use a relationship
specific to your usage, or the generic one in shared, as long as it meets the
interface requirements.

It’s also important to note that, while the relationship objects are an
important part of the design, they should not be abused. If you want to store
other data on the relationship, it should be stored in another persistent
object, such as an attribute annotation’s btree. Typically relationship
objects will differ on the basis of interfaces, annotations, and possibly
small lightweight values on the objects themselves.We’ll assume that there is an application namedappwith 30 objects in it
(named ‘ob0’ through ‘ob29’) that we’ll be relating.Creating a relationship container is easy. We’ll use an abstract Container,
but it could be from the keyref or the intid modules.>>> from zc.relationship import interfaces
>>> container = Container()
>>> from zope.interface.verify import verifyObject
>>> verifyObject(interfaces.IRelationshipContainer, container)
True

The containers can be used as parts of other objects, or as standalone local
utilities. Here's an example of adding one as a local utility.

>>> sm = app.getSiteManager()
>>> sm['lineage_relationship'] = container
>>> import zope.interface.interfaces
>>> registry = zope.interface.interfaces.IComponentRegistry(sm)
>>> registry.registerUtility(
... container, interfaces.IRelationshipContainer, 'lineage')
>>> import transaction
>>> transaction.commit()

Adding relationships is also easy: instantiate and add. The ``add`` method
adds objects and assigns them random alphanumeric keys.

>>> rel = Relationship((app['ob0'],), (app['ob1'],))
>>> verifyObject(interfaces.IRelationship, rel)
True
>>> container.add(rel)

Although the container does not have ``__setitem__`` and ``__delitem__``
(defining ``add`` and ``remove`` instead), it does define the read-only
elements of the basic Python mapping interface.

>>> container[rel.__name__] is rel
True
>>> len(container)
1
>>> list(container.keys()) == [rel.__name__]
True
>>> list(container) == [rel.__name__]
True
>>> list(container.values()) == [rel]
True
>>> container.get(rel.__name__) is rel
True
>>> container.get('17') is None
True
>>> rel.__name__ in container
True
>>> '17' in container
False
>>> list(container.items()) == [(rel.__name__, rel)]
True

It also supports four searching methods: ``findTargets``, ``findSources``,
``findRelationships``, and ``isLinked``. Let's add a few more relationships
and examine some relatively simple cases.

>>> container.add(Relationship((app['ob1'],), (app['ob2'],)))
>>> container.add(Relationship((app['ob1'],), (app['ob3'],)))
>>> container.add(Relationship((app['ob0'],), (app['ob3'],)))
>>> container.add(Relationship((app['ob0'],), (app['ob4'],)))
>>> container.add(Relationship((app['ob2'],), (app['ob5'],)))
>>> transaction.commit() # this is indicative of a bug in ZODB; if you
... # do not do this then new objects will deactivate themselves into
... # nothingness when _p_deactivate is called

Now there are six direct relationships (all of the relationships point down
in the diagram):

ob0
| |\
ob1 | |
| | | |
ob2 ob3 ob4
|
ob5

The mapping methods have still kept up with the new additions.

>>> len(container)
6
>>> len(container.keys())
6
>>> sorted(container.keys()) == sorted(
... v.__name__ for v in container.values())
True
>>> sorted(container.items()) == sorted(
... zip(container.keys(), container.values()))
True
>>> len([v for v in container.values() if container[v.__name__] is v])
6
>>> sorted(container.keys()) == sorted(container)
True

More interestingly, let's examine some of the searching methods. What are the
direct targets of ob0?

>>> container.findTargets(app['ob0']) # doctest: +ELLIPSIS
<generator object ...>

Ah-ha! It's a generator! Let's try that again.

>>> sorted(o.id for o in container.findTargets(app['ob0']))
['ob1', 'ob3', 'ob4']

OK, what about the ones no more than two relationships away? We use the
``maxDepth`` argument, which is the second positional argument.

>>> sorted(o.id for o in container.findTargets(app['ob0'], 2))
['ob1', 'ob2', 'ob3', 'ob4']

Notice that, even though ob3 is available both through one and two
relationships, it is returned only once.

Passing in None will get all related objects: the same here as passing in 3,
or any greater integer.

>>> sorted(o.id for o in container.findTargets(app['ob0'], None))
['ob1', 'ob2', 'ob3', 'ob4', 'ob5']
>>> sorted(o.id for o in container.findTargets(app['ob0'], 3))
['ob1', 'ob2', 'ob3', 'ob4', 'ob5']
>>> sorted(o.id for o in container.findTargets(app['ob0'], 25))
['ob1', 'ob2', 'ob3', 'ob4', 'ob5']

This is true even if we put in a cycle. We'll put in a cycle between ob5 and
ob1 and look at the results.

An important aspect of the algorithm used is that it returns closer
relationships first, which we can begin to see here.

>>> container.add(Relationship((app['ob5'],), (app['ob1'],)))
>>> transaction.commit()
>>> sorted(o.id for o in container.findTargets(app['ob0'], None))
['ob1', 'ob2', 'ob3', 'ob4', 'ob5']
>>> res = list(o.id for o in container.findTargets(app['ob0'], None))
>>> sorted(res[:3]) # these are all one step away
['ob1', 'ob3', 'ob4']
>>> res[3:] # ob 2 is two steps, and ob5 is three steps.
['ob2', 'ob5']

When you see the source in the targets, you know you are somewhere inside a
cycle.

>>> sorted(o.id for o in container.findTargets(app['ob1'], None))
['ob1', 'ob2', 'ob3', 'ob5']
>>> sorted(o.id for o in container.findTargets(app['ob2'], None))
['ob1', 'ob2', 'ob3', 'ob5']
>>> sorted(o.id for o in container.findTargets(app['ob5'], None))
['ob1', 'ob2', 'ob3', 'ob5']

If you ask for objects at a distance that is not a positive integer, you'll
get a ValueError.

>>> container.findTargets(app['ob0'], 0)
Traceback (most recent call last):
...
ValueError: maxDepth must be None or a positive integer
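The closest-first, cycle-safe behavior shown above amounts to a breadth-first
traversal. The following self-contained sketch (plain dicts, not the
package's actual index machinery; `GRAPH` and `find_targets` are hypothetical
names) mimics the documented ``findTargets`` semantics, including the
``maxDepth`` validation:

```python
from collections import deque

# Toy adjacency mirroring the diagram above, including the ob5 -> ob1 cycle.
GRAPH = {
    'ob0': ['ob1', 'ob3', 'ob4'],
    'ob1': ['ob2', 'ob3'],
    'ob2': ['ob5'],
    'ob5': ['ob1'],
}

def find_targets(graph, source, max_depth=1):
    """Yield objects transitively reachable from source, closest first."""
    if max_depth is not None and (not isinstance(max_depth, int)
                                  or max_depth < 1):
        raise ValueError('maxDepth must be None or a positive integer')
    seen = set()
    queue = deque((t, 1) for t in graph.get(source, ()))
    while queue:
        node, depth = queue.popleft()
        if node in seen:
            continue  # each object is returned only once
        seen.add(node)
        yield node
        if max_depth is None or depth < max_depth:
            queue.extend((t, depth + 1) for t in graph.get(node, ()))

res = list(find_targets(GRAPH, 'ob0', None))
# One step away first, then two, then three:
# ['ob1', 'ob3', 'ob4', 'ob2', 'ob5']
```

As in the container, asking for the targets of ob1 eventually yields ob1
itself, signalling the cycle, but the traversal still terminates.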
>>> container.findTargets(app['ob0'], -1)
Traceback (most recent call last):
...
ValueError: maxDepth must be None or a positive integer
>>> container.findTargets(app['ob0'], 'kumquat') # doctest: +ELLIPSIS
Traceback (most recent call last):
...
ValueError: ...

The ``findSources`` method is the mirror of ``findTargets``: given a target,
it finds all sources. Using the same relationship tree built above, we'll
search for some sources.

>>> container.findSources(app['ob0']) # doctest: +ELLIPSIS
<generator object ...>
>>> list(container.findSources(app['ob0']))
[]
>>> list(o.id for o in container.findSources(app['ob4']))
['ob0']
>>> list(o.id for o in container.findSources(app['ob4'], None))
['ob0']
>>> sorted(o.id for o in container.findSources(app['ob1']))
['ob0', 'ob5']
>>> sorted(o.id for o in container.findSources(app['ob1'], 2))
['ob0', 'ob2', 'ob5']
>>> sorted(o.id for o in container.findSources(app['ob1'], 3))
['ob0', 'ob1', 'ob2', 'ob5']
>>> sorted(o.id for o in container.findSources(app['ob1'], None))
['ob0', 'ob1', 'ob2', 'ob5']
>>> sorted(o.id for o in container.findSources(app['ob3']))
['ob0', 'ob1']
>>> sorted(o.id for o in container.findSources(app['ob3'], None))
['ob0', 'ob1', 'ob2', 'ob5']
>>> list(o.id for o in container.findSources(app['ob5']))
['ob2']
>>> list(o.id for o in container.findSources(app['ob5'], maxDepth=2))
['ob2', 'ob1']
>>> sorted(o.id for o in container.findSources(app['ob5'], maxDepth=3))
['ob0', 'ob1', 'ob2', 'ob5']
>>> container.findSources(app['ob0'], 0)
Traceback (most recent call last):
...
ValueError: maxDepth must be None or a positive integer
>>> container.findSources(app['ob0'], -1)
Traceback (most recent call last):
...
ValueError: maxDepth must be None or a positive integer
>>> container.findSources(app['ob0'], 'kumquat') # doctest: +ELLIPSIS
Traceback (most recent call last):
...
ValueError: ...

The ``findRelationships`` method finds all relationships from, to, or between
two objects. Because it supports transitive relationships, each member of the
resulting iterator is a tuple of one or more relationships.

All arguments to ``findRelationships`` are optional, but at least one of
``source`` or ``target`` must be passed in. A search depth defaults to one
relationship deep, like the other methods.

>>> container.findRelationships(source=app['ob0']) # doctest: +ELLIPSIS
<generator object ...>
>>> sorted(
... [repr(rel) for rel in path]
... for path in container.findRelationships(source=app['ob0']))
... # doctest: +NORMALIZE_WHITESPACE
[['<Relationship from (<Demo ob0>,) to (<Demo ob1>,)>'],
['<Relationship from (<Demo ob0>,) to (<Demo ob3>,)>'],
['<Relationship from (<Demo ob0>,) to (<Demo ob4>,)>']]
>>> list(container.findRelationships(target=app['ob0']))
[]
>>> sorted(
... [repr(rel) for rel in path]
... for path in container.findRelationships(target=app['ob3']))
... # doctest: +NORMALIZE_WHITESPACE
[['<Relationship from (<Demo ob0>,) to (<Demo ob3>,)>'],
['<Relationship from (<Demo ob1>,) to (<Demo ob3>,)>']]
>>> list(
... [repr(rel) for rel in path]
... for path in container.findRelationships(
... source=app['ob1'], target=app['ob3']))
... # doctest: +NORMALIZE_WHITESPACE
[['<Relationship from (<Demo ob1>,) to (<Demo ob3>,)>']]
>>> container.findRelationships()
Traceback (most recent call last):
...
ValueError: at least one of `source` and `target` must be provided

They may also be used as positional arguments, in the order ``source``,
``target``.

>>> sorted(
... [repr(rel) for rel in path]
... for path in container.findRelationships(app['ob1']))
... # doctest: +NORMALIZE_WHITESPACE
[['<Relationship from (<Demo ob1>,) to (<Demo ob2>,)>'],
['<Relationship from (<Demo ob1>,) to (<Demo ob3>,)>']]
>>> sorted(
... [repr(rel) for rel in path]
... for path in container.findRelationships(app['ob5'], app['ob1']))
... # doctest: +NORMALIZE_WHITESPACE
[['<Relationship from (<Demo ob5>,) to (<Demo ob1>,)>']]

``maxDepth`` is again available, but it is the third positional argument now,
so keyword usage will be more frequent than with the others. Notice that the
second path has two members: from ob1 to ob2, then from ob2 to ob5.

>>> sorted(
... [repr(rel) for rel in path]
... for path in container.findRelationships(app['ob1'], maxDepth=2))
... # doctest: +NORMALIZE_WHITESPACE
[['<Relationship from (<Demo ob1>,) to (<Demo ob2>,)>'],
['<Relationship from (<Demo ob1>,) to (<Demo ob2>,)>',
'<Relationship from (<Demo ob2>,) to (<Demo ob5>,)>'],
['<Relationship from (<Demo ob1>,) to (<Demo ob3>,)>']]

Unique relationships are returned, rather than unique objects. Therefore,
while ob3 only has two transitive sources, ob1 and ob0, it has three
transitive paths.

>>> sorted(
... [repr(rel) for rel in path]
... for path in container.findRelationships(
... target=app['ob3'], maxDepth=2))
... # doctest: +NORMALIZE_WHITESPACE
[['<Relationship from (<Demo ob0>,) to (<Demo ob1>,)>',
'<Relationship from (<Demo ob1>,) to (<Demo ob3>,)>'],
['<Relationship from (<Demo ob0>,) to (<Demo ob3>,)>'],
['<Relationship from (<Demo ob1>,) to (<Demo ob3>,)>'],
['<Relationship from (<Demo ob5>,) to (<Demo ob1>,)>',
'<Relationship from (<Demo ob1>,) to (<Demo ob3>,)>']]

The same is true for the targets of ob0.

>>> sorted(
... [repr(rel) for rel in path]
... for path in container.findRelationships(
... source=app['ob0'], maxDepth=2))
... # doctest: +NORMALIZE_WHITESPACE
[['<Relationship from (<Demo ob0>,) to (<Demo ob1>,)>'],
['<Relationship from (<Demo ob0>,) to (<Demo ob1>,)>',
'<Relationship from (<Demo ob1>,) to (<Demo ob2>,)>'],
['<Relationship from (<Demo ob0>,) to (<Demo ob1>,)>',
'<Relationship from (<Demo ob1>,) to (<Demo ob3>,)>'],
['<Relationship from (<Demo ob0>,) to (<Demo ob3>,)>'],
['<Relationship from (<Demo ob0>,) to (<Demo ob4>,)>']]

Cyclic relationships are returned in a special tuple that implements
``ICircularRelationshipPath``. For instance, consider all of the paths that
lead from ob0. Notice first that all the paths are in order from shortest to
longest.

>>> res = list(
... [repr(rel) for rel in path]
... for path in container.findRelationships(
... app['ob0'], maxDepth=None))
... # doctest: +NORMALIZE_WHITESPACE
>>> sorted(res[:3]) # one step away # doctest: +NORMALIZE_WHITESPACE
[['<Relationship from (<Demo ob0>,) to (<Demo ob1>,)>'],
['<Relationship from (<Demo ob0>,) to (<Demo ob3>,)>'],
['<Relationship from (<Demo ob0>,) to (<Demo ob4>,)>']]
>>> sorted(res[3:5]) # two steps away # doctest: +NORMALIZE_WHITESPACE
[['<Relationship from (<Demo ob0>,) to (<Demo ob1>,)>',
'<Relationship from (<Demo ob1>,) to (<Demo ob2>,)>'],
['<Relationship from (<Demo ob0>,) to (<Demo ob1>,)>',
'<Relationship from (<Demo ob1>,) to (<Demo ob3>,)>']]
>>> res[5:] # three and four steps away # doctest: +NORMALIZE_WHITESPACE
[['<Relationship from (<Demo ob0>,) to (<Demo ob1>,)>',
'<Relationship from (<Demo ob1>,) to (<Demo ob2>,)>',
'<Relationship from (<Demo ob2>,) to (<Demo ob5>,)>'],
['<Relationship from (<Demo ob0>,) to (<Demo ob1>,)>',
'<Relationship from (<Demo ob1>,) to (<Demo ob2>,)>',
'<Relationship from (<Demo ob2>,) to (<Demo ob5>,)>',
'<Relationship from (<Demo ob5>,) to (<Demo ob1>,)>']]

The very last one is circular.

Now we'll change the expression to only include paths that implement
``ICircularRelationshipPath``.

>>> list(
... [repr(rel) for rel in path]
... for path in container.findRelationships(
... app['ob0'], maxDepth=None)
... if interfaces.ICircularRelationshipPath.providedBy(path))
... # doctest: +NORMALIZE_WHITESPACE
[['<Relationship from (<Demo ob0>,) to (<Demo ob1>,)>',
'<Relationship from (<Demo ob1>,) to (<Demo ob2>,)>',
'<Relationship from (<Demo ob2>,) to (<Demo ob5>,)>',
'<Relationship from (<Demo ob5>,) to (<Demo ob1>,)>']]

Note that, because relationships may have multiple targets, a relationship
that has a cycle may still be traversed for targets that do not generate a
cycle. The further paths will not be marked as a cycle.

Cycle paths not only have a marker interface to identify them, but include a
``cycled`` attribute that is a frozenset of the one or more searches that
would be equivalent to following the cycle(s). If a source is provided, the
cycled searches would continue from the end of the path.

>>> path = [path for path in container.findRelationships(
... app['ob0'], maxDepth=None)
... if interfaces.ICircularRelationshipPath.providedBy(path)][0]
>>> path.cycled
[{'source': <Demo ob1>}]
>>> app['ob1'] in path[-1].targets
True

If only a target is provided, the ``cycled`` search will continue from the
first relationship in the path.

>>> path = [path for path in container.findRelationships(
... target=app['ob5'], maxDepth=None)
... if interfaces.ICircularRelationshipPath.providedBy(path)][0]
>>> path # doctest: +NORMALIZE_WHITESPACE
cycle(<Relationship from (<Demo ob5>,) to (<Demo ob1>,)>,
<Relationship from (<Demo ob1>,) to (<Demo ob2>,)>,
<Relationship from (<Demo ob2>,) to (<Demo ob5>,)>)
>>> path.cycled
[{'target': <Demo ob5>}]

``maxDepth`` can also be used with the combination of source and target.

>>> list(container.findRelationships(
... app['ob0'], app['ob5'], maxDepth=None))
... # doctest: +NORMALIZE_WHITESPACE
[(<Relationship from (<Demo ob0>,) to (<Demo ob1>,)>,
<Relationship from (<Demo ob1>,) to (<Demo ob2>,)>,
<Relationship from (<Demo ob2>,) to (<Demo ob5>,)>)]

As usual, ``maxDepth`` must be a positive integer or None.

>>> container.findRelationships(app['ob0'], maxDepth=0)
Traceback (most recent call last):
...
ValueError: maxDepth must be None or a positive integer
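For intuition only, the path enumeration that ``findRelationships`` performs
can be sketched in plain Python: breadth-first expansion of edge paths,
shortest first, where a path whose final target already appears on the path
is flagged as a cycle and not expanded further. The names `EDGES` and
`find_paths` are hypothetical; this is not the package's implementation.

```python
from collections import deque

# One-to-one edges mirroring the earlier diagram, with the ob5 -> ob1 cycle.
EDGES = [
    ('ob0', 'ob1'), ('ob0', 'ob3'), ('ob0', 'ob4'),
    ('ob1', 'ob2'), ('ob1', 'ob3'), ('ob2', 'ob5'), ('ob5', 'ob1'),
]

def find_paths(source, edges=EDGES):
    """Yield (path, is_cycle) pairs, shortest paths first."""
    queue = deque([(edge,) for edge in edges if edge[0] == source])
    while queue:
        path = queue.popleft()
        nodes = [source] + [tgt for _, tgt in path]
        is_cycle = nodes[-1] in nodes[:-1]  # last target seen earlier on path
        yield path, is_cycle
        if not is_cycle:
            last = path[-1][1]
            queue.extend(path + (edge,) for edge in edges
                         if edge[0] == last)
```

With these edges, `list(find_paths('ob0'))` yields seven paths, and only the
final four-edge path (ending with the ob5 -> ob1 edge) is marked as a cycle,
matching the traversal shown above.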
>>> container.findRelationships(app['ob0'], maxDepth=-1)
Traceback (most recent call last):
...
ValueError: maxDepth must be None or a positive integer
>>> container.findRelationships(app['ob0'], maxDepth='kumquat')
... # doctest: +ELLIPSIS
Traceback (most recent call last):
...
ValueError: ...

The ``isLinked`` method is a convenient way to test if two objects are
linked, or if an object is a source or target in the graph. It defaults to a
``maxDepth`` of 1.

>>> container.isLinked(app['ob0'], app['ob1'])
True
>>> container.isLinked(app['ob0'], app['ob2'])
False

Note that ``maxDepth`` is pointless when supplying only one of source or
target.

>>> container.isLinked(source=app['ob29'])
False
>>> container.isLinked(target=app['ob29'])
False
>>> container.isLinked(source=app['ob0'])
True
>>> container.isLinked(target=app['ob4'])
True
>>> container.isLinked(source=app['ob4'])
False
>>> container.isLinked(target=app['ob0'])
False

Setting ``maxDepth`` works as usual when searching for a link between two
objects, though.

>>> container.isLinked(app['ob0'], app['ob2'], maxDepth=2)
True
>>> container.isLinked(app['ob0'], app['ob5'], maxDepth=2)
False
>>> container.isLinked(app['ob0'], app['ob5'], maxDepth=3)
True
>>> container.isLinked(app['ob0'], app['ob5'], maxDepth=None)
True

As usual, ``maxDepth`` must be a positive integer or None.

>>> container.isLinked(app['ob0'], app['ob1'], maxDepth=0)
Traceback (most recent call last):
...
ValueError: maxDepth must be None or a positive integer
>>> container.isLinked(app['ob0'], app['ob1'], maxDepth=-1)
Traceback (most recent call last):
...
ValueError: maxDepth must be None or a positive integer
>>> container.isLinked(app['ob0'], app['ob1'], maxDepth='kumquat')
... # doctest: +ELLIPSIS
Traceback (most recent call last):
...
ValueError: ...

The ``remove`` method is the next to last of the core interface: it allows
you to remove relationships from a container. It takes a relationship object.

As an example, let's remove the relationship from ob5 to ob1 that we created
to make the cycle.

>>> res = list(container.findTargets(app['ob2'], None)) # before removal
>>> len(res)
4
>>> res[:2]
[<Demo ob5>, <Demo ob1>]
>>> sorted(repr(o) for o in res[2:])
['<Demo ob2>', '<Demo ob3>']
>>> res = list(container.findSources(app['ob2'], None)) # before removal
>>> res[0]
<Demo ob1>
>>> res[3]
<Demo ob2>
>>> sorted(repr(o) for o in res[1:3])
['<Demo ob0>', '<Demo ob5>']
>>> rel = list(container.findRelationships(app['ob5'], app['ob1']))[0][0]
>>> rel.sources
(<Demo ob5>,)
>>> rel.targets
(<Demo ob1>,)
>>> container.remove(rel)
>>> list(container.findRelationships(app['ob5'], app['ob1']))
[]
>>> list(container.findTargets(app['ob2'], None)) # after removal
[<Demo ob5>]
>>> list(container.findSources(app['ob2'], None)) # after removal
[<Demo ob1>, <Demo ob0>]

Finally, the ``reindex`` method allows objects already in the container to be
reindexed. The default implementation of the relationship objects calls this
automatically when sources and targets are changed.

To reiterate, the relationships looked like this before.

ob0
| |\
ob1 | |
| | | |
ob2 ob3 ob4
|
ob5

We'll switch out ob3 and ob4, so the diagram looks like this.

ob0
| |\
ob1 | |
| | | |
ob2 ob4 ob3
|
ob5
>>> sorted(ob.id for ob in container.findTargets(app['ob1']))
['ob2', 'ob3']
>>> sorted(ob.id for ob in container.findSources(app['ob3']))
['ob0', 'ob1']
>>> sorted(ob.id for ob in container.findSources(app['ob4']))
['ob0']
>>> rel = next(
... iter(container.findRelationships(app['ob1'], app['ob3'])
... ))[0]
>>> rel.targets
(<Demo ob3>,)
>>> rel.targets = [app['ob4']] # this calls reindex
>>> rel.targets
(<Demo ob4>,)
>>> sorted(ob.id for ob in container.findTargets(app['ob1']))
['ob2', 'ob4']
>>> sorted(ob.id for ob in container.findSources(app['ob3']))
['ob0']
>>> sorted(ob.id for ob in container.findSources(app['ob4']))
['ob0', 'ob1']

The same sort of thing happens if we change sources. We'll change the
diagram to look like this.

ob0
| |\
ob1 | |
| |  |
ob2 | ob3
| \ |
ob5 ob4
>>> rel.sources
(<Demo ob1>,)
>>> rel.sources = (app['ob2'],) # this calls reindex
>>> rel.sources
(<Demo ob2>,)
>>> sorted(ob.id for ob in container.findTargets(app['ob1']))
['ob2']
>>> sorted(ob.id for ob in container.findTargets(app['ob2']))
['ob4', 'ob5']
>>> sorted(ob.id for ob in container.findTargets(app['ob0']))
['ob1', 'ob3', 'ob4']
>>> sorted(ob.id for ob in container.findSources(app['ob4']))
['ob0', 'ob2']

Advanced Usage

There are four other advanced tricks that the relationship container can do:
enable search filters; allow multiple sources and targets for a single
relationship; allow relating relationships; and expose unresolved token
results.

Search Filters

Because relationships are objects themselves, a number of interesting usages
are possible. They can implement additional interfaces, have annotations,
and have other attributes. One use for this is to only find objects along
relationship paths with relationships that provide a given interface. The
``filter`` argument, allowed in ``findSources``, ``findTargets``,
``findRelationships``, and ``isLinked``, supports this kind of use case.

For instance, imagine that we change the relationships to look like the
diagram below. The xxx lines indicate a relationship that implements
``ISpecialInterface``.

ob0
x |x
ob1 | x
x | x
ob2 | ob3
| x |
ob5 ob4

That is, the relationships from ob0 to ob1, ob0 to ob3, ob1 to ob2, and ob2
to ob4 implement the special interface. Let's make this happen first.

>>> from zope import interface
>>> class ISpecialInterface(interface.Interface):
... """I'm special! So special!"""
...
>>> for src, tgt in (
... (app['ob0'], app['ob1']),
... (app['ob0'], app['ob3']),
... (app['ob1'], app['ob2']),
... (app['ob2'], app['ob4'])):
... rel = list(container.findRelationships(src, tgt))[0][0]
... interface.directlyProvides(rel, ISpecialInterface)
...

Now we can use ``ISpecialInterface.providedBy`` as a filter for all of the
methods mentioned above.

findTargets

>>> sorted(ob.id for ob in container.findTargets(app['ob0']))
['ob1', 'ob3', 'ob4']
>>> sorted(ob.id for ob in container.findTargets(
... app['ob0'], filter=ISpecialInterface.providedBy))
['ob1', 'ob3']
>>> sorted(ob.id for ob in container.findTargets(
... app['ob0'], maxDepth=None))
['ob1', 'ob2', 'ob3', 'ob4', 'ob5']
>>> sorted(ob.id for ob in container.findTargets(
... app['ob0'], maxDepth=None, filter=ISpecialInterface.providedBy))
['ob1', 'ob2', 'ob3', 'ob4']

findSources

>>> sorted(ob.id for ob in container.findSources(app['ob4']))
['ob0', 'ob2']
>>> sorted(ob.id for ob in container.findSources(
... app['ob4'], filter=ISpecialInterface.providedBy))
['ob2']
>>> sorted(ob.id for ob in container.findSources(
... app['ob4'], maxDepth=None))
['ob0', 'ob1', 'ob2']
>>> sorted(ob.id for ob in container.findSources(
... app['ob4'], maxDepth=None, filter=ISpecialInterface.providedBy))
['ob0', 'ob1', 'ob2']
>>> sorted(ob.id for ob in container.findSources(
... app['ob5'], maxDepth=None))
['ob0', 'ob1', 'ob2']
>>> list(ob.id for ob in container.findSources(
... app['ob5'], filter=ISpecialInterface.providedBy))
[]

findRelationships

>>> len(list(container.findRelationships(
... app['ob0'], app['ob4'], maxDepth=None)))
2
>>> len(list(container.findRelationships(
... app['ob0'], app['ob4'], maxDepth=None,
... filter=ISpecialInterface.providedBy)))
1
>>> len(list(container.findRelationships(app['ob0'])))
3
>>> len(list(container.findRelationships(
... app['ob0'], filter=ISpecialInterface.providedBy)))
2

isLinked

>>> container.isLinked(app['ob0'], app['ob5'], maxDepth=None)
True
>>> container.isLinked(
... app['ob0'], app['ob5'], maxDepth=None,
... filter=ISpecialInterface.providedBy)
False
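The pruning that ``filter`` performs can be modeled with a plain predicate
over edges. The helper below is a hypothetical stand-in for the container's
``isLinked`` (names `EDGES`, `SPECIAL`, and `is_linked` are illustrative),
using the current diagram, where the special edges are ob0->ob1, ob0->ob3,
ob1->ob2, and ob2->ob4:

```python
# Edges of the diagram at this point; SPECIAL marks the "xxx" relationships.
EDGES = [
    ('ob0', 'ob1'), ('ob0', 'ob3'), ('ob0', 'ob4'),
    ('ob1', 'ob2'), ('ob2', 'ob4'), ('ob2', 'ob5'),
]
SPECIAL = {('ob0', 'ob1'), ('ob0', 'ob3'), ('ob1', 'ob2'), ('ob2', 'ob4')}

def is_linked(edges, source, target, max_depth=1, filter=None):
    """Breadth-first link test; `filter` prunes edges during traversal."""
    seen = set()
    frontier = [source]
    depth = 0
    while frontier and (max_depth is None or depth < max_depth):
        depth += 1
        next_frontier = []
        for node in frontier:
            for src, tgt in edges:
                if src != node:
                    continue
                if filter is not None and not filter((src, tgt)):
                    continue  # skip relationships the filter rejects
                if tgt == target:
                    return True
                if tgt not in seen:
                    seen.add(tgt)
                    next_frontier.append(tgt)
        frontier = next_frontier
    return False

special = SPECIAL.__contains__
```

Because the filter is applied at every traversal step, ob5 is unreachable
through special edges even though it is reachable without the filter, echoing
the container results above.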
>>> container.isLinked(
... app['ob0'], app['ob2'], maxDepth=None,
... filter=ISpecialInterface.providedBy)
True
>>> container.isLinked(
... app['ob0'], app['ob4'])
True
>>> container.isLinked(
... app['ob0'], app['ob4'],
... filter=ISpecialInterface.providedBy)
False

Multiple Sources and/or Targets; Duplicate Relationships

Relationships are not always between a single source and a single target.
Many approaches to this are possible, but a simple one is to allow
relationships to have multiple sources and multiple targets. This is an
approach that the relationship container supports.

>>> container.add(Relationship(
... (app['ob2'], app['ob4'], app['ob5'], app['ob6'], app['ob7']),
... (app['ob1'], app['ob4'], app['ob8'], app['ob9'], app['ob10'])))
>>> container.add(Relationship(
... (app['ob10'], app['ob0']),
... (app['ob7'], app['ob3'])))

Before we examine the results, look at those for a second. Among the
interesting items is that we have duplicated the ob2->ob4 relationship in the
first example, and duplicated the ob0->ob3 relationship in the second. The
relationship container does not limit duplicate relationships: it simply adds
and indexes them, and will include the additional relationship path in
``findRelationships``.

>>> sorted(o.id for o in container.findTargets(app['ob4']))
['ob1', 'ob10', 'ob4', 'ob8', 'ob9']
>>> sorted(o.id for o in container.findTargets(app['ob10']))
['ob3', 'ob7']
>>> sorted(o.id for o in container.findTargets(app['ob4'], maxDepth=2))
['ob1', 'ob10', 'ob2', 'ob3', 'ob4', 'ob7', 'ob8', 'ob9']
>>> sorted(
... [repr(rel) for rel in path]
... for path in container.findRelationships(
... app['ob2'], app['ob4']))
... # doctest: +NORMALIZE_WHITESPACE
[['<Relationship from
(<Demo ob2>, <Demo ob4>, <Demo ob5>, <Demo ob6>, <Demo ob7>)
to
(<Demo ob1>, <Demo ob4>, <Demo ob8>, <Demo ob9>, <Demo ob10>)>'],
['<Relationship from (<Demo ob2>,) to (<Demo ob4>,)>']]

There's also a reflexive relationship in there, with ob4 pointing to ob4.
It's marked as a cycle.

>>> list(container.findRelationships(app['ob4'], app['ob4']))
... # doctest: +NORMALIZE_WHITESPACE
[cycle(<Relationship from
(<Demo ob2>, <Demo ob4>, <Demo ob5>, <Demo ob6>, <Demo ob7>)
to
(<Demo ob1>, <Demo ob4>, <Demo ob8>, <Demo ob9>, <Demo ob10>)>,)]
>>> list(container.findRelationships(app['ob4'], app['ob4']))[0].cycled
[{'source': <Demo ob4>}]

Relating Relationships and Relationship Containers

Relationships are objects. We've already shown and discussed how this means
that they can implement different interfaces and be annotated. It also means
that relationships are first-class objects that can be related themselves.
This allows relationships that keep track of who created other relationships,
and other use cases.

Even the relationship containers themselves can be nodes in a relationship
container.

>>> container1 = app['container1'] = Container()
>>> container2 = app['container2'] = Container()
>>> rel = Relationship((container1,), (container2,))
>>> container.add(rel)
>>> container.isLinked(container1, container2)
True

Exposing Unresolved Tokens

For specialized use cases, usually optimizations, sometimes it is useful to
have access to raw results from a given implementation. For instance, if a
relationship has many members, it might make sense to have an intid-based
relationship container return the actual intids.

The containers include three methods for these sorts of use cases:
``findTargetTokens``, ``findSourceTokens``, and ``findRelationshipTokens``.
They take the same arguments as their similarly-named cousins.

Convenience classes

Three convenience classes exist for relationships with a single source and/or
a single target only.

One-To-One Relationship

A ``OneToOneRelationship`` relates a single source to a single target.

>>> from zc.relationship.shared import OneToOneRelationship
>>> rel = OneToOneRelationship(app['ob20'], app['ob21'])

>>> verifyObject(interfaces.IOneToOneRelationship, rel)
True

All container methods work as for the general many-to-many relationship. We
repeat some of the tests defined in the main section above (all relationships
defined there are actually one-to-one relationships).

>>> container.add(rel)
>>> container.add(OneToOneRelationship(app['ob21'], app['ob22']))
>>> container.add(OneToOneRelationship(app['ob21'], app['ob23']))
>>> container.add(OneToOneRelationship(app['ob20'], app['ob23']))
>>> container.add(OneToOneRelationship(app['ob20'], app['ob24']))
>>> container.add(OneToOneRelationship(app['ob22'], app['ob25']))
>>> rel = OneToOneRelationship(app['ob25'], app['ob21'])
>>> container.add(rel)

findTargets

>>> sorted(o.id for o in container.findTargets(app['ob20'], 2))
['ob21', 'ob22', 'ob23', 'ob24']

findSources

>>> sorted(o.id for o in container.findSources(app['ob21'], 2))
['ob20', 'ob22', 'ob25']

findRelationships

>>> sorted(
... [repr(rel) for rel in path]
... for path in container.findRelationships(app['ob21'], maxDepth=2))
... # doctest: +NORMALIZE_WHITESPACE
[['<Relationship from (<Demo ob21>,) to (<Demo ob22>,)>'],
['<Relationship from (<Demo ob21>,) to (<Demo ob22>,)>',
'<Relationship from (<Demo ob22>,) to (<Demo ob25>,)>'],
['<Relationship from (<Demo ob21>,) to (<Demo ob23>,)>']]

>>> sorted(
... [repr(rel) for rel in path]
... for path in container.findRelationships(
... target=app['ob23'], maxDepth=2))
... # doctest: +NORMALIZE_WHITESPACE
[['<Relationship from (<Demo ob20>,) to (<Demo ob21>,)>',
'<Relationship from (<Demo ob21>,) to (<Demo ob23>,)>'],
['<Relationship from (<Demo ob20>,) to (<Demo ob23>,)>'],
['<Relationship from (<Demo ob21>,) to (<Demo ob23>,)>'],
['<Relationship from (<Demo ob25>,) to (<Demo ob21>,)>',
'<Relationship from (<Demo ob21>,) to (<Demo ob23>,)>']]

>>> list(container.findRelationships(
... app['ob20'], app['ob25'], maxDepth=None))
... # doctest: +NORMALIZE_WHITESPACE
[(<Relationship from (<Demo ob20>,) to (<Demo ob21>,)>,
<Relationship from (<Demo ob21>,) to (<Demo ob22>,)>,
<Relationship from (<Demo ob22>,) to (<Demo ob25>,)>)]

>>> list(
... [repr(rel) for rel in path]
... for path in container.findRelationships(
... app['ob20'], maxDepth=None)
... if interfaces.ICircularRelationshipPath.providedBy(path))
... # doctest: +NORMALIZE_WHITESPACE
[['<Relationship from (<Demo ob20>,) to (<Demo ob21>,)>',
'<Relationship from (<Demo ob21>,) to (<Demo ob22>,)>',
'<Relationship from (<Demo ob22>,) to (<Demo ob25>,)>',
'<Relationship from (<Demo ob25>,) to (<Demo ob21>,)>']]

isLinked

>>> container.isLinked(source=app['ob20'])
True
>>> container.isLinked(target=app['ob24'])
True
>>> container.isLinked(source=app['ob24'])
False
>>> container.isLinked(target=app['ob20'])
False
>>> container.isLinked(app['ob20'], app['ob22'], maxDepth=2)
True
>>> container.isLinked(app['ob20'], app['ob25'], maxDepth=2)
False

remove

>>> res = list(container.findTargets(app['ob22'], None)) # before removal
>>> res[:2]
[<Demo ob25>, <Demo ob21>]
>>> container.remove(rel)
>>> list(container.findTargets(app['ob22'], None)) # after removal
[<Demo ob25>]

reindex

>>> rel = next(
... iter(container.findRelationships(app['ob21'], app['ob23']))
... )[0]

>>> rel.target
<Demo ob23>
>>> rel.target = app['ob24'] # this calls reindex
>>> rel.target
<Demo ob24>

>>> rel.source
<Demo ob21>
>>> rel.source = app['ob22'] # this calls reindex
>>> rel.source
<Demo ob22>

ManyToOneRelationship

A ``ManyToOneRelationship`` relates multiple sources to a single target.

>>> from zc.relationship.shared import ManyToOneRelationship
>>> rel = ManyToOneRelationship((app['ob22'], app['ob26']), app['ob24'])

>>> verifyObject(interfaces.IManyToOneRelationship, rel)
True

>>> container.add(rel)
>>> container.add(ManyToOneRelationship(
... (app['ob26'], app['ob23']),
... app['ob20']))

The relationship diagram now looks like this:

ob20 (ob22, ob26) (ob26, ob23)
| |\ | |
ob21 | | ob24 ob20
| | |
ob22 | ob23
| \ |
ob25 ob24

We created a cycle for ob20 via ob23.

>>> sorted(o.id for o in container.findSources(app['ob24'], None))
['ob20', 'ob21', 'ob22', 'ob23', 'ob26']

>>> sorted(o.id for o in container.findSources(app['ob20'], None))
['ob20', 'ob23', 'ob26']

>>> list(container.findRelationships(app['ob20'], app['ob20'], None))
... # doctest: +NORMALIZE_WHITESPACE
[cycle(<Relationship from (<Demo ob20>,) to (<Demo ob23>,)>,
<Relationship from (<Demo ob26>, <Demo ob23>) to (<Demo ob20>,)>)]
>>> list(container.findRelationships(
... app['ob20'], app['ob20'], 2))[0].cycled
[{'source': <Demo ob20>}]

The ``ManyToOneRelationship``'s ``sources`` attribute is mutable, while its
``targets`` attribute is immutable.

>>> rel.sources
(<Demo ob22>, <Demo ob26>)

>>> rel.sources = [app['ob26'], app['ob24']]

>>> rel.targets
(<Demo ob24>,)
>>> rel.targets = (app['ob22'],)
Traceback (most recent call last):
...
AttributeError: can't set attribute

But the relationship has an additional mutable ``target`` attribute.

>>> rel.target
<Demo ob24>
>>> rel.target = app['ob22']

OneToManyRelationship

A ``OneToManyRelationship`` relates a single source to multiple targets.

>>> from zc.relationship.shared import OneToManyRelationship
>>> rel = OneToManyRelationship(app['ob22'], (app['ob20'], app['ob27']))

>>> verifyObject(interfaces.IOneToManyRelationship, rel)
True

>>> container.add(rel)
>>> container.add(OneToManyRelationship(
... app['ob20'],
... (app['ob23'], app['ob28'])))

The updated diagram looks like this:

ob20 (ob26, ob24) (ob26, ob23)
| |\ | |
ob21 | | ob22 ob20
| | | | |
ob22 | ob23 (ob20, ob27) (ob23, ob28)
| \ |
ob25 ob24

Altogether there are now three cycles for ob22.

>>> sorted(o.id for o in container.findTargets(app['ob22']))
['ob20', 'ob24', 'ob25', 'ob27']
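The transitive variants of these queries below (with a maxDepth of None) amount to a reachability computation. A plain-Python sketch, illustrative only (zc.relationship uses indexed BTrees, nothing like these dictionaries), shows the same effect using the direct targets from the diagram above:

```python
# Illustrative only: collect everything reachable from a starting object,
# the way findTargets(ob, None) walks relationships transitively.
def transitive_targets(targets, start):
    found = set()
    pending = [start]
    while pending:
        node = pending.pop()
        for nxt in targets.get(node, ()):
            if nxt not in found:
                found.add(nxt)
                pending.append(nxt)
    return found

# Direct targets from the diagram, in made-up plain-dict form.
targets = {
    'ob22': {'ob20', 'ob24', 'ob25', 'ob27'},
    'ob20': {'ob21', 'ob23', 'ob24', 'ob28'},
    'ob21': {'ob22'},
}
print(sorted(transitive_targets(targets, 'ob22')))
# ['ob20', 'ob21', 'ob22', 'ob23', 'ob24', 'ob25', 'ob27', 'ob28']
```

Note that ob22 itself shows up in the result because of the cycles, exactly as in the doctest.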
>>> sorted(o.id for o in container.findTargets(app['ob22'], None))
['ob20', 'ob21', 'ob22', 'ob23', 'ob24', 'ob25', 'ob27', 'ob28']

>>> sorted(o.id for o in container.findTargets(app['ob20']))
['ob21', 'ob23', 'ob24', 'ob28']
>>> sorted(o.id for o in container.findTargets(app['ob20'], None))
['ob20', 'ob21', 'ob22', 'ob23', 'ob24', 'ob25', 'ob27', 'ob28']

>>> sorted(repr(c) for c in
... container.findRelationships(app['ob22'], app['ob22'], None))
... # doctest: +NORMALIZE_WHITESPACE
['cycle(<Relationship from (<Demo ob22>,) to (<Demo ob20>, <Demo ob27>)>,
<Relationship from (<Demo ob20>,) to (<Demo ob21>,)>,
<Relationship from (<Demo ob21>,) to (<Demo ob22>,)>)',
'cycle(<Relationship from (<Demo ob22>,) to (<Demo ob20>, <Demo ob27>)>,
<Relationship from (<Demo ob20>,) to (<Demo ob24>,)>,
<Relationship from (<Demo ob26>, <Demo ob24>) to (<Demo ob22>,)>)',
'cycle(<Relationship from (<Demo ob22>,) to (<Demo ob24>,)>,
<Relationship from (<Demo ob26>, <Demo ob24>) to (<Demo ob22>,)>)']

The OneToManyRelationship's targets attribute is mutable, while its
sources attribute is immutable.

>>> rel.targets
(<Demo ob20>, <Demo ob27>)
>>> rel.targets = [app['ob28'], app['ob21']]

>>> rel.sources
(<Demo ob22>,)
>>> rel.sources = (app['ob23'],)
Traceback (most recent call last):
...
AttributeError: can't set attribute

But the relationship has an additional mutable source attribute.

>>> rel.source
<Demo ob22>
>>> rel.source = app['ob23']

Changes

2.1 (2021-03-22)

- Add support for Python 3.7 up to 3.9.
- Update to zope.component >= 5.

2.0.post1 (2018-06-19)

- Fix PyPI page by using correct ReST syntax.

2.0 (2018-06-19)

The 2.x line is almost completely compatible with the 1.x line.
The one notable incompatibility does not affect the use of relationship
containers and is small enough that it will hopefully affect no one.

New Requirements

- zc.relation

Incompatibilities with 1.0

- findRelationships will now use the default TransitiveQueriesFactory if it
  is set. Set maxDepth to 1 if you do not want this behavior.
- Some instantiation exceptions have different error messages.

Changes in 2.0

- The relationship index code has been moved out to zc.relation and
  significantly refactored there. A fully backwards compatible subclass
  remains in zc.relationship.index.
- Support both 64-bit and 32-bit BTree families.
- Support specifying indexed values by passing callables rather than
  interface elements (which are also still supported).
- In findValues and findValueTokens, the query argument is now optional. If
the query evaluates to False in a boolean context, all values, or value
tokens, are returned. Value tokens are explicitly returned using the
underlying BTree storage. This can then be used directly for other BTree
operations.

In these and other cases, you should not ever mutate returned results!
They may be internal data structures (and are intended to be so, so
that they can be used for efficient set operations for other uses).
The interfaces hopefully clarify what calls will return an internal
data structure.

- README has a new beginning, which both demonstrates some of the new features
  and tries to be a bit simpler than the later sections.
- findRelationships and the new method findRelationshipTokens can find
  relationships transitively and intransitively.
- findRelationshipTokens, when used intransitively, repeats the behavior of
  findRelationshipTokenSet. (findRelationshipTokenSet remains in the API, not
  deprecated, a companion to findValueTokenSet.)
- 100% test coverage (per the usual misleading line analysis :-) of index
module. (Note that the significantly lower test coverage of the container
code is unlikely to change without contributions: I use the index
exclusively. See plone.relations for a zc.relationship container with
very good test coverage.)

- Tested with Python 2.7 and Python >= 3.5.
- Added test extra to declare test dependency on zope.app.folder.

Branch 1.1

(supports Zope 3.4/Zope 2.11/ZODB 3.8)

1.1.0

- Adjust to BTrees changes in ZODB 3.8 (thanks Juergen Kartnaller).
- Converted buildout to rely exclusively on eggs.

Branch 1.0

(supports Zope 3.3/Zope 2.10/ZODB 3.7)

1.0.2

- Incorporated tests and bug fixes to relationship containers from
  Markus Kemmerling:

  - ManyToOneRelationship instantiation was broken.
  - The findRelationships method misbehaved if both source and target
    are not None, but bool(target) evaluated to False.
  - ISourceRelationship and ITargetRelationship had errors.

1.0.1

- Incorporated test and bug fix from Gabriel Shaar:

  if the target parameter is a container with no objects, then
`shared.AbstractContainer.isLinked` resolves to False in a bool context and
tokenization fails. `target and tokenize({'target': target})` returns the
target instead of the result of the tokenize function.

- Made README.rst tests pass on hopefully wider set of machines (this was a
test improvement; the relationship index did not have the fragility).
Reported by Gabriel Shaar.

1.0.0

- Initial release |
zc.reloadmonitor | zc.reloadmonitor provides a plug-in for zc.monitor. It allows you to
cause already imported modules to be reloaded.

To use, just connect to the monitor port and give the command:

    reload my.module

To configure/enable from Python, use:

    import zc.reloadmonitor
    zc.reloadmonitor.configure()

To configure from ZCML, use:

    <include package="zc.reloadmonitor" />

Changes

0.3.0 (2010-10-07)

- Fixed setup.py so that configure.zcml gets properly included on install.

0.2.0 (2010-10-07)

- Make the reload monitor compatible with zope.interface 3.5.

0.1.0 (2010-09-03)

- Initial release |
zc.resourcelibrary | The resource library is a Zope 3 extension that is designed to make the
inclusion of JavaScript, CSS, and other resources easy, cache-friendly,
and component-friendly.

Resource Library

The resource library is designed to make the inclusion of JavaScript, CSS, and
other resources easy, cache-friendly, and component-friendly. For instance, if
two widgets on a page need the same JavaScript library, the library should be
only loaded once, but the widget designers should not have to concern
themselves with the presence of other widgets.Imagine that one widget has a copy of a fictional Javascript library. To
configure that library as available use ZCML like this:>>> zcml("""
... <configure
... xmlns="http://namespaces.zope.org/zope"
... package="zc.resourcelibrary">
... <include package="." file="meta.zcml" />
... <resourceLibrary name="some-library">
... <directory source="tests/example"/>
... </resourceLibrary>
...
... </configure>
... """)This is exactly equivalent to a resourceDirectory tag, with no additional
effect.Loading FilesIt is also possible to indicate that one or more Javascript or CSS files should
be included (by reference) into the HTML of a page that needs the library.
This is the current difference between resourceLibrary and resourceDirectory.>>> zcml("""
... <configure
... xmlns="http://namespaces.zope.org/zope"
... package="zc.resourcelibrary">
... <include package="." file="meta.zcml" />
... <resourceLibrary name="my-lib">
... <directory
... source="tests/example/my-lib"
... include="included.js included.css included.kss"
... />
... </resourceLibrary>
...
... </configure>
... """)If a file is included that the resource library doesn’t understand (i.e. it
isn’t Javascript or CSS), an exception will occur.>>> zcml("""
... <configure
... xmlns="http://namespaces.zope.org/zope"
... package="zc.resourcelibrary">
... <include package="." file="meta.zcml" />
... <resourceLibrary name="bad-lib">
... <directory
... source="tests/example/my-lib"
... include="included.bad"
... />
... </resourceLibrary>
...
... </configure>
... """)
Traceback (most recent call last):
...
ConfigurationError: Resource library doesn't know how to include this file: "included.bad".
File...

Usage

Components signal their need for a particular resource library (Javascript or
otherwise) by using a special TAL expression. (The use of replace is not
mandated, the result may be assigned to a dummy variable, or otherwise
ignored.)

>>> zpt('<tal:block replace="resource_library:my-lib"/>')

We'll be using a testbrowser.Browser to simulate a user viewing web pages.

>>> from zope.testbrowser.wsgi import Browser
>>> browser = Browser()
>>> browser.addHeader('Authorization', 'Basic mgr:mgrpw')
>>> browser.handleErrors = False

When a page is requested that does not need any resource libraries, the HTML
will be untouched.

>>> browser.open('http://localhost/zc.resourcelibrary.test_template_1')
>>> browser.contents
'...<head></head>...'

When a page is requested that uses a component that needs a resource library,
the library will be referenced in the rendered page.

>>> browser.open('http://localhost/zc.resourcelibrary.test_template_2')

A reference to the JavaScript is inserted into the HTML.

>>> '/@@/my-lib/included.js' in browser.contents
True

And the JavaScript is available from the URL referenced.

>>> browser.open('/@@/my-lib/included.js')
>>> browser.headers['Content-Type']
'application/javascript'
>>> print(browser.contents.decode('ascii'))
function be_annoying() {
alert('Hi there!');
}

For inclusion of resources, the full base URL with namespaces is used.

>>> browser.open('http://localhost/++skin++Basic/zc.resourcelibrary.test_template_2')
>>> print(browser.contents)
<html...
src="http://localhost/++skin++Basic/@@/my-lib/included.js"...
</html>

A reference to the CSS is also inserted into the HTML.

>>> browser.open('http://localhost/zc.resourcelibrary.test_template_2')
>>> '/@@/my-lib/included.css' in browser.contents
True

And the CSS is available from the URL referenced.

>>> browser.open('/@@/my-lib/included.css')
>>> browser.headers['Content-Type']
'text/css'
>>> print(browser.contents.decode('ascii'))
div .border {
border: 1px silid black;
}

A reference to an unknown library causes an exception.

>>> browser.open('http://localhost/zc.resourcelibrary.test_template_3')
Traceback (most recent call last):
...
RuntimeError: Unknown resource library: "does-not-exist"

Library usage may also be signaled programmatically. For example, if a page
would not otherwise include a resource library...

>>> page = ('<html><head></head>'
... '<body tal:define="unused view/doSomething">'
...     'This is the body.</body>')

>>> class View(object):
...     context = getRootFolder()
...     def doSomething(self):
...         pass

>>> zpt(page, view=View())
'...<head></head>...'

If we then programmatically indicate that a resource library is needed, it will
be included.

>>> import zc.resourcelibrary
>>> class View(object):
... context = getRootFolder()
... def doSomething(self):
...         zc.resourcelibrary.need('my-lib')

>>> '/@@/my-lib/included.js' in zpt(page, view=View())
True

Content-type checking

Resources should be referenced only from HTML and XML content; other content
types should not be touched by the resource library:

>>> page = ('<html><head>'
... '<tal:block replace="resource_library:my-lib"/>'
...     '</head><body></body></html>')

>>> '/@@/my-lib/included.js' in zpt(page, content_type='text/html')
True

>>> '/@@/my-lib/included.js' in zpt(page, content_type='text/xml')
True

>>> '/@@/my-lib/included.js' in zpt(page, content_type='text/none')
False

This also works if the content type contains uppercase characters, as per RfC
2045 on the syntax of MIME type specifications (we can’t test uppercase
characters in the major type yet since the publisher is not completely up to
the RfC on that detail yet):

>>> '/@@/my-lib/included.js' in zpt(page, content_type='text/hTMl')
True

>>> '/@@/my-lib/included.js' in zpt(page, content_type='text/nOne')
False

Parameters to the content type can't fool the check either:

>>> '/@@/my-lib/included.js' in zpt(
... page, content_type='text/xml; charset=utf-8')
True

>>> '/@@/my-lib/included.js' in zpt(
...     page, content_type='text/none; charset=utf-8')
False

The content type is, however, assumed to be a strictly valid MIME type
specification, implying that it can’t contain any whitespace up to the
semicolon signalling the start of parameters, if any (we can’t test whitespace
around the major type as that would already upset the publisher):

>>> '/@@/my-lib/included.js' in zpt(
...     page, content_type='text/ xml')
False

>>> '/@@/my-lib/included.js' in zpt(
...     page, content_type='text/xml ; charset=utf-8')
False

The content type may also be None if it was never set, which of course doesn't
count as HTML or XML either:

>>> from zc.resourcelibrary import publication
>>> from io import BytesIO
>>> request = publication.Request(body_instream=BytesIO(), environ={})
>>> request.response.setResult("This is not HTML text.")
>>> b'/@@/my-lib/included.js' in request.response.consumeBody()
False

Dependencies

If a resource library registers a dependency on another library, the dependency
must be satisfied or an error will be generated.

>>> zcml("""
... <configure
... xmlns="http://namespaces.zope.org/zope"
... package="zc.resourcelibrary">
... <include package="." file="meta.zcml" />
...
... <resourceLibrary name="dependent-but-unsatisfied" require="not-here">
... <directory source="tests/example"/>
... </resourceLibrary>
...
... </configure>
... """)
Traceback (most recent call last):
...
ConfigurationError:...Resource library "dependent-but-unsatisfied" has unsatisfied dependency on "not-here"...
...

When the dependencies are satisfied, the registrations will succeed.

>>> zcml("""
... <configure
... xmlns="http://namespaces.zope.org/zope"
... package="zc.resourcelibrary">
... <include package="." file="meta.zcml" />
...
... <resourceLibrary name="dependent" require="dependency">
... <directory source="tests/example" include="1.js"/>
... </resourceLibrary>
...
... <resourceLibrary name="dependency">
... <directory source="tests/example" include="2.css"/>
... </resourceLibrary>
...
... </configure>
... """)If one library depends on another and the first library is referenced on a
page, the second library will also be included in the rendered HTML.>>> zpt('<tal:block replace="resource_library:dependent"/>')
>>> browser.open('http://localhost/zc.resourcelibrary.test_template_4')
>>> '/@@/dependent/1.js' in browser.contents
True
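The dependency resolution demonstrated here (and the ordering requirement discussed just below) amounts to a depth-first expansion in which required libraries come before the libraries that require them. A plain-Python sketch, not zc.resourcelibrary's actual code, with the library names taken from this section:

```python
# Illustrative only: expand a library's requirements depth-first so that
# dependencies are listed before the libraries that depend on them.
def expand(requires, name, seen=None):
    # requires maps a library name to the names it requires.
    if seen is None:
        seen = set()
    result = []
    for dep in requires.get(name, ()):
        if dep not in seen:
            seen.add(dep)
            result.extend(expand(requires, dep, seen))
    result.append(name)
    return result

requires = {'dependent': ['dependency'],
            'only_require': ['my-lib', 'dependent']}
print(expand(requires, 'dependent'))
# ['dependency', 'dependent']
print(expand(requires, 'only_require'))
# ['my-lib', 'dependency', 'dependent', 'only_require']
```

In the real package the expansion also skips libraries with no included files of their own, which is why `only_require` itself never appears in the rendered page.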
>>> '/@@/dependency/2.css' in browser.contents
True

Order matters, especially for js files, so the dependency should
appear before the dependent library in the page.

>>> print(browser.contents.strip())
<html>...dependency/2.css...dependent/1.js...</html>

It is possible for a resource library to only register a list of dependencies
and not specify any resources.

When such a library is used in a resource_library statement in a template,
only its dependencies are referenced in the final rendered page.

>>> zcml("""
... <configure
... xmlns="http://namespaces.zope.org/zope"
... package="zc.resourcelibrary">
... <include package="." file="meta.zcml" />
...
... <resourceLibrary name="only_require" require="my-lib dependent"/>
...
... </configure>
... """)
>>> zpt('<tal:block replace="resource_library:only_require"/>')
>>> browser.open('http://localhost/zc.resourcelibrary.test_template_7')
>>> '/@@/my-lib/included.js' in browser.contents
True
>>> '/@@/my-lib/included.css' in browser.contents
True
>>> '/@@/dependent/1.js' in browser.contents
True
>>> '/@@/dependency/2.css' in browser.contents
True
>>> '/@@/only_require' in browser.contents
False

Error Conditions

Errors are reported if you do something wrong.

>>> zcml("""
... <configure
... xmlns="http://namespaces.zope.org/zope"
... package="zc.resourcelibrary">
... <include package="." file="meta.zcml" />
...
... <resourceLibrary name="some-library">
... <directory source="does-not-exist"/>
... </resourceLibrary>
...
... </configure>
... """)
Traceback (most recent call last):
...
ConfigurationError: Directory u'...does-not-exist' does not exist
File...

Multiple Heads

On occasion the body of an HTML document may contain the text "<head>". In
those cases, only the actual head tag should be manipulated. The first
occurrence of “<head>” has the script tag inserted…>>> browser.open('http://localhost/zc.resourcelibrary.test_template_5')
>>> print(browser.contents)
<html>...<head> <script src="http://localhost/@@/my-lib/included.js"...…but that is the only time it is inserted.>>> browser.contents.count('src="http://localhost/@@/my-lib/included.js"')
1Error during publishingNote that in case an exception is raised during publishing, the
resource library is disabled.>>> browser.handleErrors = True
>>> browser.post(
... 'http://localhost/zc.resourcelibrary.test_template_5',
... 'value:int=dummy', 'multipart/form-data')
Traceback (most recent call last):
...
urllib.error.HTTPError: ...
>>> '/@@/my-lib/included.js' in browser.contents
False

Custom "directory" factories

By default, a resource directory is created when a directory directive
is used. You can add a factory option to specify a different
resource-directory factory. This can be used, for example, to provide
dynamic resources.

>>> zcml("""
... <configure
... xmlns="http://namespaces.zope.org/zope"
... package="zc.resourcelibrary">
... <include package="." file="meta.zcml" />
...
... <resourceLibrary name="my-lib">
... <directory
... source="tests/example/my-lib"
... include="foo.js"
... factory="zc.resourcelibrary.tests.tests.TestFactory"
... />
... </resourceLibrary>
...
... </configure>
... """, clear=['my-lib'])The factory will be called with a source directory, a security checker
and a name. We’ve created a class that implements a resource
directory dynamically.>>> browser.open('http://localhost/zc.resourcelibrary.test_template_2')
>>> '/@@/my-lib/foo.js' in browser.contents
True>>> browser.open('http://localhost/@@/my-lib/foo.js')
>>> print(browser.contents)
foo = 1;Library insertion place markerYou can explicitly mark where to insert HTML. Do do that, add the
special comment "<!-- zc.resourcelibrary -->" (exact string, w/o quotes)
to the template. It will be replaced by resource libraries HTML on
processing.

>>> browser.open('http://localhost/zc.resourcelibrary.test_template_6')

A reference to the JavaScript is inserted into the HTML.

>>> print(browser.contents)
<html>
<head>
<title>Marker test</title>
<BLANKLINE>
<!-- Libraries will be included below -->
<script src="http://localhost/@@/my-lib/foo.js"
type="text/javascript">
</script>
</head>
...
</html>

Future Work

- We want to be able to specify a single file to add to the resource.
- We may want to be able to override a file in the resource with a different
  file.
- Currently only one <directory> tag is allowed per-library. If multiple tags
  are allowed, should they be merged or have distinct prefixes?
- Add a test to ensure that files are only included once, and in the proper
  order.

CHANGES

2.1.0 (2018-10-19)

- Add support for Python 3.7.

2.0.0 (2017-05-23)

- Add support for Python 3.4, 3.5, 3.6 and PyPy.
- Drop test dependency on zope.app.testing and zope.app.zcmlfiles, among
  others.
- Make zope.app.publication dependency optional.

1.3.4 (2012-01-20)

- Register adapters with getSiteManager rather than getGlobalSiteManager. This
allows registering resource libraries in non-global sites. For details see:

  - https://mail.zope.org/pipermail/zope-dev/2010-March/039657.html
  - http://docs.pylonsproject.org/projects/pyramid_zcml/en/latest/narr.html#using-broken-zcml-directives

- Raise NotImplementedError if we find that a second ZCML declaration would
  change the global library_info dict in a way that may (depending on ZCML
  ordering) break applications at runtime. These errors were pretty hard to
  debug.
- Remove unneeded test dependencies on zope.app.authentication and
  zope.app.securitypolicy.
- Remove dependency on zope.app.pagetemplate.

1.3.2 (2010-08-16)

- Response._addDependencies will only include a ResourceLibrary in the
list of dependencies if the ResourceLibrary actually has included
resources.

  This makes directives that simply declare dependencies on other
  libraries work again.

- Add missing dependency on zope.app.pagetemplate; clean up unused
  imports and whitespace.

1.3.1 (2010-03-24)

- Resource libraries that are required during a retried request are now
correctly registered and injected into the HTML.

- Import hooks functionality from zope.component after it was moved there
  from zope.site. This lifts the dependency on zope.site.
- Removed an unused ISite import and, thereby, the undeclared dependency on
  zope.location.

1.3.0 (2009-10-08)

- Use zope.browserresource instead of zope.app.publisher, removing
  a dependency on the latter.
- Look up the "resources view" via queryMultiAdapter instead of looking into
  the adapter registry.
- Moved the dependency on zope.site to the test dependencies.

1.2.0 (2009-06-04)

- Use zope.site instead of zope.app.component. Removes direct
dependency on zope.app.component.

1.1.0 (2009-05-05)

New features:

- An attempt to generate resource URLs using the "resources view" (@@)
  is now made; if unsuccessful, we fall back to the previous method of
  crafting the URL by hand from the site URL. This ensures that the
resource library respects the existing plugging points for resource
publishing (see zope.app.publisher.browser.resources).

- You can now explicitly specify where resource links should be
  inserted using the special marker comment '<!-- zc.resourcelibrary -->'.

1.0.2 (2009-01-27)

- Remove zope.app.zapi from dependencies, substituting
  its uses with direct imports.
- Use zope-dev at zope.org mailing list address instead of
  zope3-dev at zope.org as the latter one is retired.
- Change "cheeseshop" to "pypi" in the package homepage.

1.0.1 (2008-03-07)

Bugs fixed:

- Added the behavior from the standard Zope 3 response to guess that a body
that is not HTML without an explicit mimetype should have a
‘text/plain’ mimetype. This means that, for instance, redirects with
a body of ‘’ and no explicit content type will no longer cause an
exception in the resourcelibrary response code.

1.0.0 (2008-02-17)

New features:

- You can now provide an alternative "directory-resource"
  factory. This facilitates implementation of dynamic resources.

Bugs fixed:

- Updated the functional-testing zcml file to get rid of a deprecation
  warning.

0.8.2 (2007-12-07)

- Bug fix: when checking content type, take into account that it may be None.

0.8.1 (2007-12-05)

- Changed MIME type handling to be more restrictive about whitespace to
conform to RfC 2045.

0.8 (2007-12-04)

- Fixed the check for HTML and XML content to allow content type parameters.

0.6.1 (2007-11-03)

- Update package meta-data.
- Fixed package dependencies.
- Merged functional and unit tests.

0.6.0 (2006-09-22)

- ???

0.5.2 (2006-06-15)

- Add more package meta-data.

0.5.1 (2006-06-06)

- Update package code to work with newer versions of other packages.

0.5.0 (2006-04-24)

- Initial release. |
zc.resumelb | This package provides a load balancer for WSGI applications that sorts
requests into request classes and assigns requests of a given class to
the same workers.

The load balancer can benefit you if you have an application that:

- has too much load (or is too slow) to be handled by a single
  process,
- has a working set that is too large to fit in the caches
  used by your process, and
- there is a way to classify requests so that there is little overlap
  in the working sets of the various classes.

If what's above applies to you (or if you're curious), read on.

Architecture

An application deployed using the load balancer consists of one or
more load balancers, and multiple workers. Web requests come into the
load balancers, are converted to WSGI environments and requests, in
environment form, are handed over to workers over long-lived
multiplexed connections.

Workers compute résumés, which are dictionaries mapping request
classes to scores, which are average requests per second. Workers send
load balancers their résumés periodically, and when load balancers
connect to them.

Multiple load balancers can be used for redundancy or load
distribution. Résumés are managed by workers to assure that load
balancers have the same information about worker skills.

Status

The current version of the load-balancer should be considered
experimental. We're currently testing it in production.

The documentation is a bit thin, but there are extensive doctests.

Request Classification

You need to provide a request-classification function that takes a
WSGI environment and returns a request class string.

Two classifiers are built-in:

host
  The host classifier uses HTTP Host header values, normalized by
  removing leading "www." prefixes, if present.

re_classifier
  A general classifier (factory) that applies a regular expression
  with a class group to an environment value.

For example, to use the first step in a request URL path, you'd use
the following request-classifier option to one of the load-balancer
scripts described below:

    -r 'zc.resumelb.lb:re_classifier("PATH_INFO", r"/(?P<class>[^/]+)")'

Deployment

Deploying the load balancer requires deploying each of the workers,
and deploying the load balancer(s) itself. The workers are deployed much
like any WSGI stack. The workers serve as WSGI servers, even though
they don't accept HTTP requests directly.

There are two built-in strategies for deploying applications,
depending on whether you're willing to drink some ZooKeeper kool-aid.

If you use ZooKeeper, workers can bind to ephemeral ports and register
them with ZooKeeper. The load balancer monitors ZooKeeper and adds
and removes workers to its pool as worker processes are started and
stopped.

Basic deployment

The basic deployment is the easiest to get up and running quickly.

Basic worker deployment

In the basic deployment, you deploy each worker as you would any WSGI
application. A Paste Deployment server runner is provided by the
paste.server_runner main entry point. The runner accepts
parameters:

use egg:zc.resumelb
  This selects the basic worker runner.

address HOST:PORT
  The address to listen on, in the form HOST:PORT.

history SIZE
  Roughly, the number of requests to consider when computing a
  worker's résumé. This defaults to 9999.

max_skill_age SIZE
  The maximum number of requests without a request in a request class
  before a request class is dropped from a worker's résumé.

  If not specified, this defaults to 10 times the history.

threads NTHREADS
  If specified with a number greater than zero, then a thread pool of
  the given size is used to call the underlying WSGI stack.

resume_file PATH
  The path to a résumé file. Periodically, the worker's résumé is
  saved to this file, and the file is read on startup to initialize
  the worker's résumé.

tracelog LOGGER
  Enable request trace logging and specify the name of the Python logger to
use.

Basic load-balancer deployment

The load balancer is a program that should be run with a daemonizer,
like zdaemon or supervisor. It gets its configuration by way of
command-line arguments. Run it with -h to get a list of options.

The basic load balancer is provided by the resumelb script
provided by the package.

Basic Example

Here's a sample paste.ini file defining a WSGI stack:

[app:main]
use = egg:bobo
bobo_resources = zc.resumelb.tests
[server:main]
use = egg:zc.resumelb
address = 127.0.0.1:8000

And here's a load-balancer command you'd use with this worker:

    resumelb -LINFO -a :8080 127.0.0.1:8000

In this example, the load balancer listens on port 8080 and connects
to the worker on port 8000.

ZooKeeper-based deployment

In a ZooKeeper-based deployment, workers register with ZooKeeper and
the load balancer gets worker addresses from ZooKeeper. As workers are
started and stopped, they’re automatically added to and removed from
the load-balancer pool. In addition, most configuration parameters are
read from ZooKeeper and are updated at run time when they are changed
in ZooKeeper. To learn more about ZooKeeper and how to build and
maintain a ZooKeeper tree, see http://pypi.python.org/pypi/zc.zk.

ZooKeeper-based worker deployment

As with the basic deployment, you deploy ZooKeeper-based workers as
servers in a WSGI stack. A Paste Deployment server runner is provided
by the paste.server_runner zk entry point. The runner accepts
parameters:

use egg:zc.resumelb#zk
  This selects the ZooKeeper-based worker runner.

zookeeper CONNECTION
  A ZooKeeper connection string.

path PATH
  The path to a ZooKeeper node where the worker should get
  configuration and register its address. The node should have a
  providers subnode where the address is published.

address HOST:PORT
  The address to listen on, in the form HOST:PORT.

threads NTHREADS
  If specified with a number greater than zero, then a thread pool of
  the given size is used to call the underlying WSGI stack.

resume_file PATH
  The path to a résumé file. Periodically, the worker's résumé is
  saved to this file, and the file is read on startup to initialize
  the worker's résumé.

tracelog LOGGER
  Enable request trace logging and specify the name of the Python
  logger to use.

ZooKeeper-based load-balancer deployment

The load balancer is a program that should be run with a daemonizer,
like zdaemon or supervisor. It gets its configuration by way of
command-line arguments. Run it with -h to get a list of options.

ZooKeeper Example

Here's a sample paste.ini file defining a WSGI stack:

[app:main]
use = egg:bobo
bobo_resources = zc.resumelb.tests
[server:main]
use = egg:zc.resumelb#zk
zookeeper = 127.0.0.1:2181
path = /lb/workers

And here's a load-balancer command you'd use with this worker:

    zkresumelb -LINFO 127.0.0.1:2181 /lb

The above example assumes you have a ZooKeeper server running on port
2181 and that it includes a tree that looks like:

    /lb
      /providers
      /workers
        /providers

See http://pypi.python.org/pypi/zc.zk to learn more about building and
maintaining ZooKeeper trees.

Change History

1.0.2 (2015-03-11)

- Fixed: the nagios monitor metric for max request age showed -1 when
there were no outstanding requests. This was silly.

- Fixed a packaging bug.

1.0.1 (2015-03-03)

- Fixed: uncaught application exceptions were mishandled for HEAD
  requests.
- Fixed: LB worker paths couldn't be links in single-version mode, or
  when using alternate pool implementations.

1.0.0 (2015-02-19)

- Nagios monitoring plugin. See src/zc/resumelb/nagios.rst.
- You can now supply alternative pool implementations.

  Thanks to: https://github.com/zopefoundation/zc.resumelb/pull/3

- There's a new pool implementation,
  zc.resumelb.classlesspool.ClasslessPool, that allocates work
solely based on backlogs, ignoring resumes. This is useful for
smaller applications that don’t have large resident sets or a good
way to segregate requests, but that can benefit from ZooKeeper-aware
load balancing.

0.7.5 (2014-11-18)

- Fixed: Tracelogs didn't include start and stop records.

0.7.4 (2014-10-29)

- Fixed: Applications or middleware that didn't call the WSGI
start_response function before returning an iterator weren’t handled
properly.

- Fixed: File descriptors leaked when load balancers disconnected from
  workers.

0.7.3 (2014-06-04)

- Added some optimizations to reduce latency between load balancers
  and workers.

0.7.2 (2014-06-02)

- Added keep-alive messages from load balancers to workers to detect
  workers that have gone away uncleanly.

  (Note that workers don't have to be updated.)

0.7.1 (2012-10-17)

- Fixed: When used with ZooKeeper, a load balancer could end up with
multiple connections to the same worker due to ZooKeeper
“flapping”. (ZooKeeper might report that workers had gone away and
come back without the workers actually going away.)

- Fixed: When using single-version mode, flapping between versions
  could cause worker and book backlogs to be computed incorrectly,
  causing assertion errors.

- In single-version mode, log version changes.

0.7.0 (2012-07-05)

- Added support in the load balancer for applications that can't have
  multiple worker versions. You can upgrade workers
  gradually. Workers with the new version will be ignored until
  they're in the majority, at which time the lb will stop using
  workers with the old version.

0.6.2 (2012-06-15)

- Fixed: a lack of socket timeout could cause requests to leak.

0.6.0 (2012-05-11)

- Added a command-line script to fetch lb status data, assuming you're
  using the ZooKeeper-aware load-balancer script and have requested a
  status server. (Also updated the status output to show request
  start times as integer seconds.)

0.5.2 (2012-05-09)

- Fixed: Temporary files created when buffering data in the load
  balancers weren't closed explicitly. Generally, they were closed
  through garbage collection, but in certain situations, their numbers
  could build quickly, leading to file-descriptor exhaustion.

- Fixed: Tracelog 'I' records didn't always contain input length
  information.

- Fixed: Tracelog 'I' records were only included when using thread
  pools.

0.5.1 (2012-05-07)

- Fixed: Worker resume data wasn't initialized correctly when no
  parameters are passed to the constructor and when reading a resume
  file, causing resumes not to update.

- Fixed: worker errors were written to standard out rather than being
  logged.

- Fixed: Poorly-behaved WSGI applications that fail to catch errors
  caused requests to hang rather than return 500 responses.

0.5.0 (2012-05-03)

- Changed the way tracelog records are identified to reflect lb
  request numbers. Records are disambiguated by including an lb
  identifier as a prefix. For example, "1.22" indicates request number
  22 from lb 1.

- When defining workers that register with ZooKeeper, you can now
  supply a description in the paste.ini file that shows up in
  ZooKeeper. While the pid alone provides enough information to find
  a worker, often a description (e.g. instance name or path) can make
  it easier.

0.4.0 (2012-04-27)

- Changed the load-balancing algorithm to take backlogs of
  underutilized workers into account to allow a lower variance
  parameter to be used, which allows new workers to be better
  utilized.

- Changed the load-balancing algorithm to try just a little bit harder
  to keep work with skilled workers by not penalizing workers for
  their first outstanding request. (In other words, when adjusting
  worker scores checking a maximum backlog, we subtract 1 from the
  worker's backlog if it's non-zero.)

- The status server provided when using ZooKeeper now listens on a
  unix-domain socket.

- The status server provided when using ZooKeeper now includes the
  start time of the oldest request for each worker, to be used for
  monitoring.

- Fixed: Workers buffered large request bodies in memory. Now large
  request bodies are buffered to disk.

- Internal optimizations, especially with regard to handling large
  request and response bodies.

0.3.0 (2012-03-28)

- Changed the way the zkresumelb (load-balancer program that works with
  ZooKeeper) handles access logs. Now, you pass a Python logging logger
  name. If you don't pass anything, then nothing will be logged.

0.2.0 (2012-03-27)

- There's a new API for getting worker résumés, typically from
  monitoring code:

    >>> import zc.resumelb.worker
    >>> print zc.resumelb.worker.get_resume(addr)

  This is useful both for getting a worker's résumé and for making
  sure that the worker is accepting load-balancer connections.

  There's also a script version of this:

    bin/get-worker-resume 192.168.24.60:33161

- When using ZooKeeper, you can request an lb status server. The
  address gets registered with ZooKeeper. When you connect to it, you
  get back a JSON string containing the overall lb backlog and
  addresses and backlogs of each worker.

- The update settings methods were changed to revert settings to
  default when not provided. This is especially important when used
  with ZooKeeper, so you can look at a tree and know what settings are
  without knowing the change history.

- Added graceful load-balancer and worker shutdown on SIGTERM.

- Fixed: trace log request ids weren't assigned correctly when using
  multiple load balancers.

- Added packaging meta data to help find gevent 1.0b1
  (which is at http://code.google.com/p/gevent/downloads/list)

- Updated the API for application trace logging to match that of
  zc.zservertracelog, mainly to get database logging for ZTK
  applications.

0.1.0 (2012-03-09)

- Initial release. |
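The backlog-adjustment heuristic described in the 0.4.0 notes above (a worker's first outstanding request isn't penalized when scoring) can be illustrated with a toy sketch. This is only an illustration under assumed scoring rules, not zc.resumelb's actual implementation; the names `adjusted_backlog` and `pick_worker` are hypothetical:

```python
# Toy sketch of the 0.4.0 heuristic: when scoring workers, subtract 1
# from a non-zero backlog so the first outstanding request is "free".
# Illustration only -- not zc.resumelb's real scoring code.

def adjusted_backlog(backlog):
    """Return the backlog with the first outstanding request forgiven."""
    return backlog - 1 if backlog else 0

def pick_worker(workers):
    """Pick the worker with the best resume-score/backlog tradeoff.

    ``workers`` maps worker name -> (resume_score, backlog), where the
    resume score reflects how skilled the worker is at this request class.
    """
    def score(item):
        name, (resume_score, backlog) = item
        # Each adjusted backlog unit dilutes the worker's resume score.
        return resume_score / (1.0 + adjusted_backlog(backlog))
    return max(workers.items(), key=score)[0]

workers = {'w1': (10.0, 1), 'w2': (10.0, 4), 'w3': (2.0, 0)}
print(pick_worker(workers))  # -> w1: its single request is not penalized
```

Note how `w1` and `w3` both end up with an adjusted backlog of 0, so the choice between them falls back to résumé skill alone.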
zcrm-python-cl | Python Client Library |
zcrmsdk | ZOHO CRM PYTHON SDK

Table Of Contents

- Overview
- Registering a Zoho Client
- Environmental Setup
- Including the SDK in your project
- Persistence
  - DataBase Persistence
  - File Persistence
  - Custom Persistence
- Configuration
- Initialization
- Class Hierarchy
- Responses And Exceptions
- Threading
  - Multithreading in a Multi-User App
  - Multi-threading in a Single User App
- Sample Code

Overview

Zoho CRM PYTHON SDK offers a way to create client Python applications that can be integrated with Zoho CRM.

Registering a Zoho Client

Since Zoho CRM APIs are authenticated with OAuth2 standards, you should register your client app with Zoho. To register your app:

1. Visit this page: https://api-console.zoho.com
2. Click on ADD CLIENT.
3. Choose a Client Type.
4. Enter Client Name, Client Domain or Homepage URL and Authorized Redirect URIs, then click CREATE.
5. Your Client app will now have been created and displayed.
6. Select the created OAuth client.
7. Generate a grant token by providing the necessary scopes, time duration (the duration for which the generated token is valid) and Scope Description.

Environmental Setup

The Python SDK is installable through pip, a tool for dependency management in Python. The SDK expects the following from the client app:

- The client app must have Python (version 3 and above).
- The Python SDK must be installed into the client app through pip.

Including the SDK in your project

You can include the SDK in your project as follows:

1. Install Python from python.org (if not installed).
2. Install the Python SDK:
   - Navigate to the workspace of your client app.
   - Run the command below:

     pip install zcrmsdk==3.x.x

   The Python SDK will be installed in your client application.

Token Persistence

Token persistence refers to storing and utilizing the authentication tokens that are provided by Zoho. There are three ways provided by the SDK in which persistence can be utilized.
They are DataBase Persistence, File Persistence and Custom Persistence.Table of ContentsDataBase PersistenceFile PersistenceCustom PersistenceImplementing OAuth PersistenceOnce the application is authorized, OAuth access and refresh tokens can be used for subsequent user data requests to Zoho CRM. Hence, they need to be persisted by the client app.The persistence is achieved by writing an implementation of the inbuilt Abstract Base ClassTokenStore, which has the following callback methods.get_token(self, user,token)- invoked before firing a request to fetch the saved tokens. This method should return implementation of inbuiltToken Classobject for the library to process it.save_token(self, user,token)- invoked after fetching access and refresh tokens from Zoho.delete_token(self,token)- invoked before saving the latest tokens.get_tokens(self)- The method to get the all the stored tokens.delete_tokens(self)- The method to delete all the stored tokens.Note:user is an instance of UserSignature Class.token is an instance of Token Class.DataBase PersistenceIn case the user prefers to use default DataBase persistence,MySQLcan be used.The database name should bezohooauth.There must be a table nameoauthtokenwith the following columns.id int(11)user_mail varchar(255)client_id varchar(255)refresh_token varchar(255)access_token varchar(255)grant_token varchar(255)expiry_time varchar(20)MySQL Querycreatetableoauthtoken(idint(11)notnullauto_increment,user_mailvarchar(255)notnull,client_idvarchar(255),refresh_tokenvarchar(255),access_tokenvarchar(255),grant_tokenvarchar(255),expiry_timevarchar(20),primarykey(id));altertableoauthtokenauto_increment=1;NoteThe Database persistence requires the following librariesmysql-connectormysql-connector-pythonCreate DBStore objectfromzcrmsdk.src.com.zoho.api.authenticator.storeimportDBStore"""DBStore takes the following parameters1 -> DataBase host name. Default value "localhost"2 -> DataBase name. 
Default value "zohooauth"3 -> DataBase user name. Default value "root"4 -> DataBase password. Default value ""5 -> DataBase port number. Default value "3306""""store=DBStore()store=DBStore(host='host_name',database_name='database_name',user_name='user_name',password='password',port_number='port_number')File PersistenceIn case of File Persistence, the user can persist tokens in the local drive, by providing the absolute file path to the FileStore object.The File containsuser_mailclient_idrefresh_tokenaccess_tokengrant_tokenexpiry_timeCreate FileStore objectfromzcrmsdk.src.com.zoho.api.authenticator.storeimportFileStore"""FileStore takes the following parameter1 -> Absolute file path of the file to persist tokens"""store=FileStore(file_path='/Users/username/Documents/python_sdk_token.txt')Custom PersistenceTo use Custom Persistence, the user must implement the Abstract Base ClassTokenStoreand override the methods.fromzcrmsdk.src.com.zoho.api.authenticator.storeimportTokenStoreclassCustomStore(TokenStore):def__init__(self):passdefget_token(self,user,token):"""Parameters:user (UserSignature) : A UserSignature class instance.token (Token) : A Token (zcrmsdk.src.com.zoho.api.authenticator.OAuthToken) class instance"""# Add code to get the tokenreturnNonedefsave_token(self,user,token):"""Parameters:user (UserSignature) : A UserSignature class instance.token (Token) : A Token (zcrmsdk.src.com.zoho.api.authenticator.OAuthToken) class instance"""# Add code to save the tokendefdelete_token(self,token):"""Parameters:token (Token) : A Token (zcrmsdk.src.com.zoho.api.authenticator.OAuthToken) class instance"""# Add code to delete the tokendefget_tokens():"""Returns:list: List of stored tokens"""# Add code to get all the stored tokensdefdelete_tokens():# Add code to delete all the stored tokensConfigurationBefore you get started with creating your Python application, you need to register your client and authenticate the app with Zoho.Create an instance ofLoggerClass to log 
exception and API information.fromzcrmsdk.src.com.zoho.api.loggerimportLogger"""Create an instance of Logger Class that takes two parameters1 -> Level of the log messages to be logged. Can be configured by typing Logger.Levels "." and choose any level from the list displayed.2 -> Absolute file path, where messages need to be logged."""logger=Logger.get_instance(level=Logger.Levels.INFO,file_path="/Users/user_name/Documents/python_sdk_log.log")Create an instance ofUserSignatureClass that identifies the current user.fromzcrmsdk.src.com.zoho.crm.api.user_signatureimportUserSignature# Create an UserSignature instance that takes user Email as parameteruser=UserSignature(email='[email protected]')Configure API environment which decides the domain and the URL to make API calls.fromzcrmsdk.src.com.zoho.crm.api.dcimportUSDataCenter"""Configure the environmentwhich is of the pattern Domain.EnvironmentAvailable Domains: USDataCenter, EUDataCenter, INDataCenter, CNDataCenter, AUDataCenterAvailable Environments: PRODUCTION(), DEVELOPER(), SANDBOX()"""environment=USDataCenter.PRODUCTION()Create an instance of OAuthToken with the information that you get after registering your Zoho client.fromzcrmsdk.src.com.zoho.api.authenticator.oauth_tokenimportOAuthToken,TokenType"""Create a Token instance that takes the following parameters1 -> OAuth client id.2 -> OAuth client secret.3 -> REFRESH/GRANT token.4 -> token type.5 -> OAuth redirect URL. Default value is None"""token=OAuthToken(client_id='clientId',client_secret='clientSecret',token='REFRESH/ GRANT Token',token_type=TokenType.REFRESH/TokenType.GRANT,redirect_url='redirectURL')Create an instance ofTokenStoreto persist tokens, used for authenticating all the requests.fromzcrmsdk.src.com.zoho.api.authenticator.storeimportDBStore,FileStore"""DBStore takes the following parameters1 -> DataBase host name. Default value "localhost"2 -> DataBase name. Default value "zohooauth"3 -> DataBase user name. 
Default value "root"4 -> DataBase password. Default value ""5 -> DataBase port number. Default value "3306""""store=DBStore()#store = DBStore(host='host_name', database_name='database_name', user_name='user_name', password='password', port_number='port_number')"""FileStore takes the following parameter1 -> Absolute file path of the file to persist tokens"""#store = FileStore(file_path='/Users/username/Documents/python_sdk_tokens.txt')Create an instance ofSDKConfigcontaining the SDK Configuration.fromzcrmsdk.src.com.zoho.crm.api.sdk_configimportSDKConfig"""auto_refresh_fields (Default value is False)if True - all the modules' fields will be auto-refreshed in the background, every hour.if False - the fields will not be auto-refreshed in the background. The user can manually delete the file(s) or refresh the fields using methods from ModuleFieldsHandler(zcrmsdk/src/com/zoho/crm/api/util/module_fields_handler.py)pick_list_validation (Default value is True)A boolean field that validates user input for a pick list field and allows or disallows the addition of a new value to the list.if True - the SDK validates the input. If the value does not exist in the pick list, the SDK throws an error.if False - the SDK does not validate the input and makes the API request with the user’s input to the pick list"""config=SDKConfig(auto_refresh_fields=True,pick_list_validation=False)The path containing the absolute directory path (in the key resource_path) to store user-specific files containing information about fields in modules.resource_path='/Users/user_name/Documents/python-app'Create an instance of RequestProxy containing the proxy properties of the user.fromzcrmsdk.src.com.zoho.crm.api.request_proxyimportRequestProxy"""RequestProxy takes the following parameters1 -> Host2 -> Port Number3 -> User Name. Default value is None4 -> Password. 
Default value is an empty string"""request_proxy=RequestProxy(host='proxyHost',port=80)request_proxy=RequestProxy(host='proxyHost',port=80,user='userName',password='password')Initializing the ApplicationInitialize the SDK using the following code.fromzcrmsdk.src.com.zoho.crm.api.user_signatureimportUserSignaturefromzcrmsdk.src.com.zoho.crm.api.dcimportUSDataCenterfromzcrmsdk.src.com.zoho.api.authenticator.storeimportDBStore,FileStorefromzcrmsdk.src.com.zoho.api.loggerimportLoggerfromzcrmsdk.src.com.zoho.crm.api.initializerimportInitializerfromzcrmsdk.src.com.zoho.api.authenticator.oauth_tokenimportOAuthToken,TokenTypefromzcrmsdk.src.com.zoho.crm.api.sdk_configimportSDKConfigclassSDKInitializer(object):@staticmethoddefinitialize():"""Create an instance of Logger Class that takes two parameters1 -> Level of the log messages to be logged. Can be configured by typing Logger.Levels "." and choose any level from the list displayed.2 -> Absolute file path, where messages need to be logged."""logger=Logger.get_instance(level=Logger.Levels.INFO,file_path='/Users/user_name/Documents/python_sdk_log.log')# Create an UserSignature instance that takes user Email as parameteruser=UserSignature(email='[email protected]')"""Configure the environmentwhich is of the pattern Domain.EnvironmentAvailable Domains: USDataCenter, EUDataCenter, INDataCenter, CNDataCenter, AUDataCenterAvailable Environments: PRODUCTION(), DEVELOPER(), SANDBOX()"""environment=USDataCenter.PRODUCTION()"""Create a Token instance that takes the following parameters1 -> OAuth client id.2 -> OAuth client secret.3 -> REFRESH/GRANT token.4 -> token type.5 -> OAuth redirect URL."""token=OAuthToken(client_id='clientId',client_secret='clientSecret',token='REFRESH/ GRANT Token',token_type=TokenType.REFRESH/TokenType.GRANT,redirect_url='redirectURL')"""Create an instance of TokenStore1 -> Absolute file path of the file to persist 
tokens"""store=FileStore(file_path='/Users/username/Documents/python_sdk_tokens.txt')"""Create an instance of TokenStore1 -> DataBase host name. Default value "localhost"2 -> DataBase name. Default value "zohooauth"3 -> DataBase user name. Default value "root"4 -> DataBase password. Default value ""5 -> DataBase port number. Default value "3306""""store=DBStore()store=DBStore(host='host_name',database_name='database_name',user_name='user_name',password='password',port_number='port_number')"""auto_refresh_fields (Default value is False)if True - all the modules' fields will be auto-refreshed in the background, every hour.if False - the fields will not be auto-refreshed in the background. The user can manually delete the file(s) or refresh the fields using methods from ModuleFieldsHandler(zcrmsdk/src/com/zoho/crm/api/util/module_fields_handler.py)pick_list_validation (Default value is True)A boolean field that validates user input for a pick list field and allows or disallows the addition of a new value to the list.if True - the SDK validates the input. If the value does not exist in the pick list, the SDK throws an error.if False - the SDK does not validate the input and makes the API request with the user’s input to the pick list"""config=SDKConfig(auto_refresh_fields=True,pick_list_validation=False)"""The path containing the absolute directory path (in the key resource_path) to store user-specific files containing information about fields in modules."""resource_path='/Users/user_name/Documents/python-app'"""Create an instance of RequestProxy class that takes the following parameters1 -> Host2 -> Port Number3 -> User Name. Default value is None4 -> Password. 
Default value is None"""request_proxy=RequestProxy(host='host',port=8080)request_proxy=RequestProxy(host='host',port=8080,user='user',password='password')"""Call the static initialize method of Initializer class that takes the following arguments1 -> UserSignature instance2 -> Environment instance3 -> Token instance4 -> TokenStore instance5 -> SDKConfig instance6 -> resource_path7 -> Logger instance. Default value is None8 -> RequestProxy instance. Default value is None"""Initializer.initialize(user=user,environment=environment,token=token,store=store,sdk_config=config,resource_path=resource_path,logger=logger,proxy=request_proxy)SDKInitializer.initialize()You can now access the functionalities of the SDK. Refer to the sample codes to make various API calls through the SDK.Class HierarchyResponses and ExceptionsAll SDK methods return an instance of the APIResponse class.After a successful API request, theget_object()method returns an instance of theResponseWrapper(forGET) or theActionWrapper(forPOST, PUT, DELETE)Whenever the API returns an error response, theget_object()returns an instance ofAPIExceptionclass.ResponseWrapper(forGETrequests) andActionWrapper(forPOST, PUT, DELETErequests) are the expected objects for Zoho CRM APIs’ responsesHowever, some specific operations have different expected objects, such as the following:Operations involving records in TagsRecordActionWrapperGetting Record Count for a specific Tag operationCountWrapperOperations involving BaseCurrencyBaseCurrencyActionWrapperLead convert operationConvertActionWrapperRetrieving Deleted records operationDeletedRecordsWrapperRecord image download operationFileBodyWrapperMassUpdate record operationsMassUpdateActionWrapperMassUpdateResponseWrapperGET RequestsTheget_object()returns an instance of one of the following classes, based on the return type.Forapplication/jsonresponsesResponseWrapperCountWrapperDeletedRecordsWrapperMassUpdateResponseWrapperAPIExceptionForfile 
downloadresponsesFileBodyWrapperAPIExceptionPOST, PUT, DELETE RequestsThegetObject()returns an instance of one of the following classesActionWrapperRecordActionWrapperBaseCurrencyActionWrapperMassUpdateActionWrapperConvertActionWrapperAPIExceptionThese wrapper classes may contain one or a list of instances of the following classes, depending on the response.SuccessResponse Class, if the request was successful.APIException Class, if the request was erroneous.For example, when you insert two records, and one of them was inserted successfully while the other one failed, the ActionWrapper will contain one instance each of the SuccessResponse and APIException classes.All other exceptions such as SDK anomalies and other unexpected behaviours are thrown under the SDKException class.Threading in the Python SDKThreads in a Python program help you achieve parallelism. By using multiple threads, you can make a Python program run faster and do multiple things simultaneously.ThePython SDK(from version 3.x.x) supports both single-user and multi-user app.Multithreading in a Multi-user AppMulti-threading for multi-users is achieved using Initializer's staticswitch_user()method.# without proxyInitializer.switch_user(user=user,environment=environment,token=token,sdk_config=sdk_config_instance)# with proxyInitializer.switch_user(user=user,environment=environment,token=token,sdk_config=sdk_config_instance,proxy=request_proxy)Here is a sample code to depict multi-threading for a multi-user 
app.importthreadingfromzcrmsdk.src.com.zoho.crm.api.user_signatureimportUserSignaturefromzcrmsdk.src.com.zoho.crm.api.dcimportUSDataCenter,EUDataCenterfromzcrmsdk.src.com.zoho.api.authenticator.storeimportDBStorefromzcrmsdk.src.com.zoho.api.loggerimportLoggerfromzcrmsdk.src.com.zoho.crm.api.initializerimportInitializerfromzcrmsdk.src.com.zoho.api.authenticator.oauth_tokenimportOAuthToken,TokenTypefromzcrmsdk.src.com.zoho.crm.api.recordimport*fromzcrmsdk.src.com.zoho.crm.api.request_proxyimportRequestProxyfromzcrmsdk.src.com.zoho.crm.api.sdk_configimportSDKConfigclassMultiThread(threading.Thread):def__init__(self,environment,token,user,module_api_name,sdk_config,proxy=None):super().__init__()self.environment=environmentself.token=tokenself.user=userself.module_api_name=module_api_nameself.sdk_config=sdk_configself.proxy=proxydefrun(self):try:Initializer.switch_user(user=self.user,environment=self.environment,token=self.token,sdk_config=self.sdk_config,proxy=self.proxy)print('Getting records for User: '+Initializer.get_initializer().user.email)response=RecordOperations().get_records(self.module_api_name)ifresponseisnotNone:# Get the status code from responseprint('Status Code: '+str(response.get_status_code()))ifresponse.get_status_code()in[204,304]:print('No Content'ifresponse.get_status_code()==204else'Not Modified')return# Get object from responseresponse_object=response.get_object()ifresponse_objectisnotNone:# Check if expected ResponseWrapper instance is received.ifisinstance(response_object,ResponseWrapper):# Get the list of obtained Record instancesrecord_list=response_object.get_data()forrecordinrecord_list:forkey,valueinrecord.get_key_values().items():print(key+" : "+str(value))# Check if the request returned an exceptionelifisinstance(response_object,APIException):# Get the Statusprint("Status: "+response_object.get_status().get_value())# Get the Codeprint("Code: "+response_object.get_code().get_value())print("Details")# Get the details 
dictdetails=response_object.get_details()forkey,valueindetails.items():print(key+' : '+str(value))# Get the Messageprint("Message: "+response_object.get_message().get_value())exceptExceptionase:print(e)@staticmethoddefcall():logger=Logger.get_instance(level=Logger.Levels.INFO,file_path="/Users/user_name/Documents/python_sdk_log.log")user1=UserSignature(email="[email protected]")token1=OAuthToken(client_id="clientId1",client_secret="clientSecret1",token="GRANT Token",token_type=TokenType.GRANT)environment1=USDataCenter.PRODUCTION()store=DBStore()sdk_config_1=SDKConfig(auto_refresh_fields=True,pick_list_validation=False)resource_path='/Users/user_name/Documents/python-app'user1_module_api_name='Leads'user2_module_api_name='Contacts'environment2=EUDataCenter.SANDBOX()user2=UserSignature(email="[email protected]")sdk_config_2=SDKConfig(auto_refresh_fields=False,pick_list_validation=True)token2=OAuthToken(client_id="clientId2",client_secret="clientSecret2",token="REFRESH Token",token_type=TokenType.REFRESH,redirect_url="redirectURL")request_proxy_user_2=RequestProxy("host",8080)Initializer.initialize(user=user1,environment=environment1,token=token1,store=store,sdk_config=sdk_config_1,resource_path=resource_path,logger=logger)t1=MultiThread(environment1,token1,user1,user1_module_api_name,sdk_config_1)t2=MultiThread(environment2,token2,user2,user2_module_api_name,sdk_config_2,request_proxy_user_2)t1.start()t2.start()t1.join()t2.join()MultiThread.call()The program execution starts fromcall().The details ofuser1are given in the variables user1, token1, environment1.Similarly, the details of another useruser2are given in the variables user2, token2, environment2.For each user, an instance ofMultiThread classis created.When thestart()is called which in-turn invokes therun(), the details of user1 are passed to theswitch_usermethod through theMultiThread object. 
Therefore, this creates a thread for user1.Similarly, When thestart()is invoked again, the details of user2 are passed to theswitch_userfunction through theMultiThread object. Therefore, this creates a thread for user2.Multi-threading in a Single User AppHere is a sample code to depict multi-threading for a single-user app.importthreadingfromzcrmsdk.src.com.zoho.crm.api.user_signatureimportUserSignaturefromzcrmsdk.src.com.zoho.crm.api.dcimportUSDataCenterfromzcrmsdk.src.com.zoho.api.authenticator.storeimportDBStorefromzcrmsdk.src.com.zoho.api.loggerimportLoggerfromzcrmsdk.src.com.zoho.crm.api.initializerimportInitializerfromzcrmsdk.src.com.zoho.api.authenticator.oauth_tokenimportOAuthToken,TokenTypefromzcrmsdk.src.com.zoho.crm.api.sdk_configimportSDKConfigfromzcrmsdk.src.com.zoho.crm.api.recordimport*classMultiThread(threading.Thread):def__init__(self,module_api_name):super().__init__()self.module_api_name=module_api_namedefrun(self):try:print("Calling Get Records for module: "+self.module_api_name)response=RecordOperations().get_records(self.module_api_name)ifresponseisnotNone:# Get the status code from responseprint('Status Code: '+str(response.get_status_code()))ifresponse.get_status_code()in[204,304]:print('No Content'ifresponse.get_status_code()==204else'Not Modified')return# Get object from responseresponse_object=response.get_object()ifresponse_objectisnotNone:# Check if expected ResponseWrapper instance is received.ifisinstance(response_object,ResponseWrapper):# Get the list of obtained Record instancesrecord_list=response_object.get_data()forrecordinrecord_list:forkey,valueinrecord.get_key_values().items():print(key+" : "+str(value))# Check if the request returned an exceptionelifisinstance(response_object,APIException):# Get the Statusprint("Status: "+response_object.get_status().get_value())# Get the Codeprint("Code: "+response_object.get_code().get_value())print("Details")# Get the details 
dictdetails=response_object.get_details()forkey,valueindetails.items():print(key+' : '+str(value))# Get the Messageprint("Message: "+response_object.get_message().get_value())exceptExceptionase:print(e)@staticmethoddefcall():logger=Logger.get_instance(level=Logger.Levels.INFO,file_path="/Users/user_name/Documents/python_sdk_log.log")user=UserSignature(email="[email protected]")token=OAuthToken(client_id="clientId",client_secret="clientSecret",token="GRANT Token",token_type=TokenType.GRANT,redirect_url="redirectURL")environment=USDataCenter.PRODUCTION()store=DBStore()sdk_config=SDKConfig()resource_path='/Users/user_name/Documents/python-app'Initializer.initialize(user=user,environment=environment,token=token,store=store,sdk_config=sdk_config,resource_path=resource_path,logger=logger)t1=MultiThread('Leads')t2=MultiThread('Quotes')t1.start()t2.start()t1.join()t2.join()MultiThread.call()The program execution starts fromcall()where the SDK is initialized with the details of the user.When thestart()is called which in-turn invokes the run(), the module_api_name is switched through the MultiThread object. Therefore, this creates a thread for the particular MultiThread instance.SDK Sample codefromdatetimeimportdatetimefromzcrmsdk.src.com.zoho.crm.api.user_signatureimportUserSignaturefromzcrmsdk.src.com.zoho.crm.api.dcimportUSDataCenterfromzcrmsdk.src.com.zoho.api.authenticator.storeimportDBStorefromzcrmsdk.src.com.zoho.api.loggerimportLoggerfromzcrmsdk.src.com.zoho.crm.api.initializerimportInitializerfromzcrmsdk.src.com.zoho.api.authenticator.oauth_tokenimportOAuthToken,TokenTypefromzcrmsdk.src.com.zoho.crm.api.recordimport*fromzcrmsdk.src.com.zoho.crm.apiimportHeaderMap,ParameterMapfromzcrmsdk.src.com.zoho.crm.api.sdk_configimportSDKConfigclassRecord(object):def__init__(self):pass@staticmethoddefget_records():"""Create an instance of Logger Class that takes two parameters1 -> Level of the log messages to be logged. Can be configured by typing Logger.Levels "." 
and choose any level from the list displayed.
2 -> Absolute file path, where messages need to be logged.
"""
logger = Logger.get_instance(level=Logger.Levels.INFO, file_path="/Users/user_name/Documents/python_sdk_log.log")

# Create an UserSignature instance that takes user Email as parameter
user = UserSignature(email="[email protected]")

"""
Configure the environment
which is of the pattern Domain.Environment
Available Domains: USDataCenter, EUDataCenter, INDataCenter, CNDataCenter, AUDataCenter
Available Environments: PRODUCTION(), DEVELOPER(), SANDBOX()
"""
environment = USDataCenter.PRODUCTION()

"""
Create a Token instance that takes the following parameters
1 -> OAuth client id.
2 -> OAuth client secret.
3 -> REFRESH/GRANT token.
4 -> token type.
5 -> OAuth redirect URL.
"""
token = OAuthToken(client_id="clientId", client_secret="clientSecret", token="REFRESH/ GRANT Token", token_type=TokenType.REFRESH/TokenType.GRANT, redirect_url="redirectURL")

"""
Create an instance of TokenStore
1 -> DataBase host name. Default value "localhost"
2 -> DataBase name. Default value "zohooauth"
3 -> DataBase user name. Default value "root"
4 -> DataBase password. Default value ""
5 -> DataBase port number. Default value "3306"
"""
store = DBStore()

"""
auto_refresh_fields (Default value is False)
if True - all the modules' fields will be auto-refreshed in the background, every hour.
if False - the fields will not be auto-refreshed in the background. The user can manually delete the file(s) or refresh the fields using methods from ModuleFieldsHandler (zcrmsdk/src/com/zoho/crm/api/util/module_fields_handler.py)

pick_list_validation (Default value is True)
A boolean field that validates user input for a pick list field and allows or disallows the addition of a new value to the list.
if True - the SDK validates the input. If the value does not exist in the pick list, the SDK throws an error.
if False - the SDK does not validate the input and makes the API request with the user's input to the pick list
"""
config = SDKConfig(auto_refresh_fields=True, pick_list_validation=False)

"""
The path containing the absolute directory path (in the key resource_path) to store user-specific files containing information about fields in modules.
"""
resource_path = '/Users/user_name/Documents/python-app'

"""
Call the static initialize method of Initializer class that takes the following arguments
1 -> UserSignature instance
2 -> Environment instance
3 -> Token instance
4 -> TokenStore instance
5 -> SDKConfig instance
6 -> resource_path
7 -> Logger instance
"""
Initializer.initialize(user=user, environment=environment, token=token, store=store, sdk_config=config, resource_path=resource_path, logger=logger)

try:
    module_api_name = 'Leads'
    param_instance = ParameterMap()
    param_instance.add(GetRecordsParam.converted, 'both')
    param_instance.add(GetRecordsParam.cvid, '12712717217218')
    header_instance = HeaderMap()
    header_instance.add(GetRecordsHeader.if_modified_since, datetime.now())
    response = RecordOperations().get_records(module_api_name, param_instance, header_instance)
    if response is not None:
        # Get the status code from response
        print('Status Code: ' + str(response.get_status_code()))
        if response.get_status_code() in [204, 304]:
            print('No Content' if response.get_status_code() == 204 else 'Not Modified')
            return
        # Get object from response
        response_object = response.get_object()
        if response_object is not None:
            # Check if expected ResponseWrapper instance is received.
            if isinstance(response_object, ResponseWrapper):
                # Get the list of obtained Record instances
                record_list = response_object.get_data()
                for record in record_list:
                    # Get the ID of each Record
                    print("Record ID: " + record.get_id())
                    # Get the createdBy User instance of each Record
                    created_by = record.get_created_by()
                    # Check if created_by is not None
                    if created_by is not None:
                        # Get the Name of the created_by User
                        print("Record Created By - Name: " + created_by.get_name())
                        # Get the ID of the created_by User
                        print("Record Created By - ID: " + created_by.get_id())
                        # Get the Email of the created_by User
                        print("Record Created By - Email: " + created_by.get_email())
                    # Get the CreatedTime of each Record
                    print("Record CreatedTime: " + str(record.get_created_time()))
                    if record.get_modified_time() is not None:
                        # Get the ModifiedTime of each Record
                        print("Record ModifiedTime: " + str(record.get_modified_time()))
                    # Get the modified_by User instance of each Record
                    modified_by = record.get_modified_by()
                    # Check if modified_by is not None
                    if modified_by is not None:
                        # Get the Name of the modified_by User
                        print("Record Modified By - Name: " + modified_by.get_name())
                        # Get the ID of the modified_by User
                        print("Record Modified By - ID: " + modified_by.get_id())
                        # Get the Email of the modified_by User
                        print("Record Modified By - Email: " + modified_by.get_email())
                    # Get the list of obtained Tag instances of each Record
                    tags = record.get_tag()
                    if tags is not None:
                        for tag in tags:
                            # Get the Name of each Tag
                            print("Record Tag Name: " + tag.get_name())
                            # Get the Id of each Tag
                            print("Record Tag ID: " + tag.get_id())
                    # To get a particular field value
                    print("Record Field Value: " + str(record.get_key_value('Last_Name')))
                    print('Record KeyValues: ')
                    for key, value in record.get_key_values().items():
                        print(key + " : " + str(value))
            # Check if the request returned an exception
            elif isinstance(response_object, APIException):
                # Get the Status
                print("Status: " + response_object.get_status().get_value())
                # Get the Code
                print("Code: " + response_object.get_code().get_value())
                print("Details")
                # Get the details dict
                details = response_object.get_details()
                for key, value in details.items():
                    print(key + ' : ' + str(value))
                # Get the Message
                print("Message: " + response_object.get_message().get_value())
except Exception as e:
    print(e)

Record.get_records()
zcrmsdk-2.1 | Failed to fetch description. HTTP Status Code: 404 |
zcross | ZCross
ZCross is a python library used to read low pressure gas cross sections from various sources like LXCat.

Installation
To install this package just use pip:

pip install zcross

Cross section databases are not provided by ZCross: however, it is possible to download the cross section tables of interest from the download section of LXCat.
Once you download the cross sections in XML format, you can save them somewhere (we suggest under /opt/zcross_data) and define an environment variable pointing to that path:

export ZCROSS_DATA=/opt/zcross_data

(you can add it to your .profile file)

Examples
List the available databases:

import zcross

zs = zcross.load_all()  # be patient, it will take a while ...
for z in zs:
    print(z.database)

Show the groups and references of a specific database:

import zcross

z = zcross.load_by_name('ccc')
for group in z.database:
    print(group)
for reference in z.database.references:
    print('[{}]:'.format(reference.type))
    for k, v in reference.items():
        print('{:<10}:{}'.format(k, v))

Show the processes of a specific group:

import zcross

z = zcross.load_by_name('itikawa')
group = z.database[0]
for process in group:
    print("Process{}:{}".format(process.id, process.get_simple_type()))
    print("Comment:{}\n".format(process.comment))

Show the cross section table of a specific process:

import zcross

z = zcross.load_by_name('phelps')
process = z.database['H2O'][5]

print('Reaction:')
print(process.get_reaction())
print('Energy [{}],\tArea [{}]'.format(process.energy_units, process.cross_section_units))
for energy, area in process:
    print('{:8.2f}\t{:e}'.format(energy, area))
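A helper that resolves the data directory in the way described above might look like the following sketch (the helper name is illustrative, not part of the zcross API; /opt/zcross_data is just the suggested default):

```python
import os

def resolve_data_dir(default="/opt/zcross_data"):
    """Return the cross-section data directory, honouring ZCROSS_DATA."""
    # The ZCROSS_DATA environment variable takes precedence over the default.
    return os.environ.get("ZCROSS_DATA", default)

os.environ["ZCROSS_DATA"] = "/home/user/zcross_data"
print(resolve_data_dir())  # -> /home/user/zcross_data
```

Unsetting the variable falls back to the suggested default location.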
zc.rst2 | Script for converting from reStructuredText to various formats.

Usage: rst2 writer [arguments]

The first argument is the name of a docutils "writer".

Changes

0.2 (2006-02-04)
Added support for loading custom reStructuredText directives from
rst.directive entry points.
zcs | Z Configuration System: a flexible, powerful configuration system which takes advantage of both argparse and yacs
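The argparse+yacs combination usually works like this: argparse collects "KEY VALUE" override pairs from the command line, and a yacs-style merge folds them into the configuration. A stdlib-only sketch of that pattern (the names here are illustrative, not the actual zcs API; real yacs also casts values back to the original type):

```python
import argparse

def merge_from_list(cfg, opts):
    """Fold ['KEY', 'value', ...] override pairs into a flat config dict."""
    for key, value in zip(opts[0::2], opts[1::2]):
        if key not in cfg:
            raise KeyError("unknown config key: %s" % key)
        cfg[key] = value  # yacs-style: command line wins over defaults
    return cfg

cfg = {"TRAIN.LR": "0.1", "TRAIN.EPOCHS": "10"}
parser = argparse.ArgumentParser()
parser.add_argument("opts", nargs="*", help="KEY VALUE override pairs")
args = parser.parse_args(["TRAIN.LR", "0.01"])
print(merge_from_list(cfg, args.opts))  # TRAIN.LR overridden from the CLI
```

The point of the design is that defaults live in one nested config object while the command line only carries deltas.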
zc.s3uploadqueue | zc.s3uploadqueue
This package provides a handy script to upload files to Amazon S3
asynchronously. To learn more, see src/zc/s3uploadqueue/README.txt

Changes

0.1.1 (2012-06-15)
Use 'OrdinaryCallingFormat' so that HTTPS certificate validation works
when accessing buckets with a period in the name.

0.1.0 (2012-06-15)
Initial release
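The calling-format change above works around a real TLS detail: with virtual-host addressing, a bucket named my.backup is reached at my.backup.s3.amazonaws.com, and the wildcard certificate *.s3.amazonaws.com covers only a single extra DNS label. A toy illustration of that single-label rule (a simplified check, not the full RFC 2818 algorithm):

```python
def wildcard_matches(pattern, hostname):
    """Simplified RFC 2818-style check: '*' matches exactly one DNS label."""
    p_labels = pattern.split(".")
    h_labels = hostname.split(".")
    if len(p_labels) != len(h_labels):
        return False  # a wildcard never spans a dot
    return all(p == "*" or p == h for p, h in zip(p_labels, h_labels))

cert = "*.s3.amazonaws.com"
print(wildcard_matches(cert, "mybucket.s3.amazonaws.com"))   # True
print(wildcard_matches(cert, "my.backup.s3.amazonaws.com"))  # False: two labels
```

'OrdinaryCallingFormat' sidesteps the problem by using path-style URLs (the bucket name goes in the path, not the hostname), so the wildcard always matches.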
zcs-azzurro-api | ZCS Azzurro client for Python
The unofficial python client for ZCS Azzurro API.

Instructions
Access rights and credentials have to be requested from ZCS Azzurro support.
The maintainer of this package does not manage access,
which is entirely under the control of the ZCS Azzurro API developers.

Disclaimer
This package is not managed by the developer of the APIs and comes with no warranty about its functioning or timing.
Full implementation of all the functions of the APIs themselves is also not guaranteed.
The package could be discontinued at any time and it might fall out of date with the APIs,
though it will be updated as much as possible.
You can use it under the license conditions and under your own responsibility.

Keep this going > Support
zc.sbo | System Buildouts
The system buildout script (sbo) provided by the zc.sbo package is
used to perform "system" buildouts that write to system directories
on unix-like systems. They are run using sudo or as root so
they can write to system directories. You can install the sbo command
into a Python environment using its setup script or a tool
like easy_install.

Once installed, the sbo command is typically run with 2 arguments:

- The name of an application
- The name of a configuration

It expects the application to be a buildout-based application in
/opt/APPLICATION-NAME. It expects to find a buildout
configuration in /etc/APPLICATION-NAME/CONFIG-NAME.cfg

For example, if invoked with:

sbo myapp abccorp

It will run:

/opt/myapp/bin/buildout buildout:directory=/opt/myapp \
    -oUc /etc/myapp/abccorp.cfg

Run with the -h option to get additional help.

Changes

0.6.1 (2011-03-10)
Add missing --version option to report sbo's version.

0.6.0 (2010-11-05)
Add --installation to point to a specific software installation to use.

0.1.0 (yyyy-mm-dd)
Initial release
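The path conventions described above can be sketched as a small helper that composes the buildout command line (a simplification for illustration; the real sbo script handles more options than this):

```python
def buildout_command(application, config):
    """Compose the buildout invocation sbo runs for an application/config pair."""
    app_dir = "/opt/" + application
    return [
        app_dir + "/bin/buildout",              # buildout installed with the app
        "buildout:directory=" + app_dir,        # run it against the app directory
        "-oUc",                                 # offline, no user defaults, config file
        "/etc/%s/%s.cfg" % (application, config),
    ]

print(" ".join(buildout_command("myapp", "abccorp")))
# -> /opt/myapp/bin/buildout buildout:directory=/opt/myapp -oUc /etc/myapp/abccorp.cfg
```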
zcscommonlib | ZCSCommonLibrary
A Common Library For Use In Computer Science Projects

PIP Package: zcscommonlib
License: Mozilla Public License Version 2.0

Importing The Library

from zcscommonlib import functions as zcs
# Then use the functions as zcs.function()

Build The Library
Prepare the library for development and build it.

pip install -r requirements.txt
python setup.py bdist_wheel
pip install ./dist/zcscommonlib-VERSION-py3-none-any.whl

Running Tests
Run tests on the functions listed in test_functions.py.

python setup.py pytest

Wiki/Documentation
All documentation for ZCSCommonLibrary is available here.
zc.security | UNKNOWN |
zc.selenium | Selenium testing for Zope 3
This package provides an easy way to use Selenium tests for Zope 3
applications. It provides Selenium itself as a resource directory,
and it provides a test suite listing generated from registered views,
allowing different packages to provide tests without a central list of
tests to be maintained.

Selenium test views can also be written in Python using the
zc.selenium.pytest module. This can make tests substantially easier
to write. The file pytest.txt explains how to write tests using the Python
format.

The package also provides a test runner that:

- Runs a Zope instance
- Starts a local browser, if necessary
- Tells the local browser to run the tests

See selenium.txt to see how to set up and use the test runner.

Selenium Issues
There is a known issue in the included version of Selenium; this
affects clicking on images in MSIE. The Selenium bug report for this
problem is here: http://jira.openqa.org/browse/SRC-99

A patch for this problem is provided in the file: Selenium-Core-SRC-99.patch

It is not known whether this patch should always be applied.

CHANGES

1.2.1 (2009-02-16)
Added missing tests.zcml.

1.2.0 (2009-01-22)
Moved self-tests from configure.zcml to tests.zcml to not automatically
include them when zc.selenium is included.
pytest's selenium converts arguments to strings now. This allows calls like
self.selenium.pause(500).

1.1.0 (2009-01-19)
Feature: Updated to the latest Selenium Core release 0.8.3.
Feature: Added a --base-url option to the selenium script, so that one is
not required to include the default layer in the default skin. (Who does
this? What a security hole!)
Feature: Added a -t option to filter selenium tests by regexps. You can
also specify multiple -t options.
Bug: Added documentation on how to set up zc.selenium.
Bug: Allow wsgi option to work with python 2.5.

First public release.

1.0.0 (2008-03-27)
Internal release.
zc.sentrywsgi | This is a thin wrapper around the raven middleware which ensures SSL
validation is performed and logging configuration is also applied.

Release history

1.1.0 (2014-11-26)
Update to a much newer raven, and get out of the business of
rewiring SSL support.

1.0.1 (2014-11-26)
Fix the requests dependency to reflect the minimum version that provides
max_retries as a constructor argument for requests.adapters.HTTPAdapter.

1.0.0 (2014-11-24)
Initial release.
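What a wrapper like this does can be illustrated with a stdlib-only WSGI middleware that reports unhandled exceptions and then re-raises them (the real package delegates the reporting to raven/Sentry; this sketch only shows the wrapping pattern):

```python
import logging

def error_reporting_middleware(app, logger=logging.getLogger("errors")):
    """Wrap a WSGI app so unhandled exceptions are reported, then re-raised."""
    def wrapped(environ, start_response):
        try:
            return app(environ, start_response)
        except Exception:
            # Report (here: log) the failure, then let it propagate as before.
            logger.exception("unhandled error in %s", environ.get("PATH_INFO"))
            raise
    return wrapped

def failing_app(environ, start_response):
    raise RuntimeError("boom")

app = error_reporting_middleware(failing_app)
try:
    app({"PATH_INFO": "/demo"}, lambda status, headers: None)
except RuntimeError:
    print("exception was reported and re-raised")
```

Because the exception is re-raised, the server's normal error handling still runs; the middleware only observes.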
zc.set | 0.2 (2020-05-14)
Make Python-3 compatible
Add support for PyPy and PyPy3.

0.1 (2007-05-09)
Was never released to PyPI

Persistent sets are persistent objects that have the API of standard
Python sets. The persistent set should work the same as normal sets,
except that changes to them are persistent.

They have the same limitation as persistent lists and persistent
mappings, as found in the persistent package: unlike BTree package
data structures, changes copy the entire object in the database. This
generally means that persistent sets, like persistent lists and
persistent mappings, are inappropriate for very large collections. For
those, use BTree data structures.

The rest of this file is tests, not documentation. Find out about the
Python set API from standard Python documentation
(http://docs.python.org/lib/types-set.html, for instance) and find out about
persistence in the ZODB documentation
(http://www.zope.org/Wikis/ZODB/FrontPage/guide/index.html, for instance).

The persistent set module contains a simple persistent version of a set, that
inherits from persistent.Persistent and marks _p_changed = True for any
potentially mutating operation.>>> from ZODB.tests.util import DB
>>> db = DB()
>>> conn = db.open()
>>> root = conn.root()
>>> import zope.app.folder # import rootFolder
>>> app = root['Application'] = zope.app.folder.rootFolder()
>>> import transaction
>>> transaction.commit()>>> from zc.set import Set
>>> s = Set()
>>> app['s'] = s
>>> transaction.commit()>>> import persistent.interfaces
>>> persistent.interfaces.IPersistent.providedBy(s)
True
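The _p_changed bookkeeping exercised by the tests below can be illustrated without a ZODB at all; in this sketch a toy Persistent base class stands in for persistent.Persistent, and only mutators set the flag (this is an illustration of the pattern, not the real zc.set implementation):

```python
class Persistent(object):
    """Stand-in for persistent.Persistent: it just carries the dirty flag."""
    _p_changed = False

class SimpleSet(Persistent):
    def __init__(self):
        self._data = set()

    def add(self, item):
        self._data.add(item)
        self._p_changed = True  # mutators mark the object as needing a re-save

    def __contains__(self, item):
        return item in self._data  # reads leave the flag alone

ps = SimpleSet()
print(ps._p_changed)  # False
ps.add(1)
print(ps._p_changed)  # True
```

On commit, the ZODB re-serializes every object whose _p_changed flag is set, which is exactly why changes copy the entire set.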
>>> original = factory() # set in one test run; a persistent set in another
>>> sorted(set(dir(original)) - set(dir(s)))
[]add sets _p_changed>>> s._p_changed = False
>>> s.add(1) # add
>>> s._p_changed
True__repr__ includes module, class, and a contents view like a normal set>>> s # __repr__
zc.set.Set([1])update works as normal, but sets _p_changed>>> s._p_changed = False
>>> s.update((2,3,4,5,6,7)) # update
>>> s._p_changed
True__iter__ works>>> sorted(s) # __iter__
[1, 2, 3, 4, 5, 6, 7]__len__ works>>> len(s)
7as does __contains__>>> 3 in s
True
>>> 'kumquat' in s
False__gt__, __ge__, __eq__, __ne__, __lt__, and __le__ work normally,
equating with normal set, at least if spelled in the right direction.>>> s > original
True
>>> s >= original
True
>>> s < original
False
>>> s <= original
False
>>> s == original
False
>>> s != original
True>>> original.update(s)
>>> s > original
False
>>> s >= original
True
>>> s < original
False
>>> s <= original
True
>>> s == original
True
>>> s != original
False>>> original.add(8)
>>> s > original
False
>>> s >= original
False
>>> s < original
True
>>> s <= original
True
>>> s == original
False
>>> s != original
TrueI don’t know what __cmp__ is supposed to do–it doesn’t work with sets–so
I won’t test it.issubset and issuperset work when it is a subset.>>> s.issubset(original)
True
>>> s.issuperset(original)
False__ior__ works, including setting _p_changed>>> s._p_changed = False
>>> s |= original
>>> s._p_changed
True
>>> s == original
Trueissubset and issuperset work when sets are equal.>>> s.issubset(original)
True
>>> s.issuperset(original)
Trueissubset and issuperset work when it is a superset.>>> s.add(9)
>>> s.issubset(original)
False
>>> s.issuperset(original)
True__hash__ works, insofar as raising an error as it is supposed to.>>> hash(original)
Traceback (most recent call last):
...
TypeError:...unhashable...__iand__ works, including setting _p_changed>>> s._p_changed = False
>>> s &= original
>>> s._p_changed
True
>>> sorted(s)
[1, 2, 3, 4, 5, 6, 7, 8]__isub__ works, including setting _p_changed>>> s._p_changed = False
>>> s -= factory((1, 2, 3, 4, 5, 6, 7))
>>> s._p_changed
True
>>> sorted(s)
[8]__ixor__ works, including setting _p_changed>>> s._p_changed = False
>>> s ^= original
>>> s._p_changed
True
>>> sorted(s)
[1, 2, 3, 4, 5, 6, 7]difference_update works, including setting _p_changed>>> s._p_changed = False
>>> s.difference_update((7, 8))
>>> s._p_changed
True
>>> sorted(s)
[1, 2, 3, 4, 5, 6]intersection_update works, including setting _p_changed>>> s._p_changed = False
>>> s.intersection_update((2, 3, 4, 5, 6, 7))
>>> s._p_changed
True
>>> sorted(s)
[2, 3, 4, 5, 6]symmetric_difference_update works, including setting _p_changed>>> s._p_changed = False
>>> original.add(9)
>>> s.symmetric_difference_update(original)
>>> s._p_changed
True
>>> sorted(s)
[1, 7, 8, 9]remove works, including setting _p_changed>>> s._p_changed = False
>>> s.remove(1)
>>> s._p_changed
True
>>> sorted(s)
[7, 8, 9]If it raises an error, _p_changed is not set.>>> s._p_changed = False
>>> s.remove(1)
Traceback (most recent call last):
...
KeyError: 1
>>> s._p_changed
False
>>> sorted(s)
[7, 8, 9]discard works, including setting _p_changed>>> s._p_changed = False
>>> s.discard(9)
>>> s._p_changed
True
>>> sorted(s)
[7, 8]If you discard something that wasn’t in the set, _p_changed will still
be set. This is an efficiency decision, rather than our desired behavior,
necessarily.>>> s._p_changed = False
>>> s.discard(9)
>>> s._p_changed
True
>>> sorted(s)
[7, 8]pop works, including setting _p_changed>>> s._p_changed = False
>>> s.pop() in (7, 8)
True
>>> s._p_changed
True
>>> len(s)
1clear works, including setting _p_changed>>> s._p_changed = False
>>> s.clear()
>>> s._p_changed
True
>>> len(s)
0The methods that return sets all return persistent sets. They otherwise
work identically.__and__>>> s.update((0,1,2,3,4))
>>> res = s & original
>>> sorted(res)
[1, 2, 3, 4]
>>> res.__class__ is s.__class__
True__or__>>> res = s | original
>>> sorted(res)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> res.__class__ is s.__class__
True__sub__>>> res = s - original
>>> sorted(res)
[0]
>>> res.__class__ is s.__class__
True__xor__>>> res = s ^ original
>>> sorted(res)
[0, 5, 6, 7, 8, 9]
>>> res.__class__ is s.__class__
True__rand__>>> res = set((3,4,5)) & s
>>> sorted(res)
[3, 4]
>>> res.__class__ is s.__class__
True__ror__>>> res = set((3,4,5)) | s
>>> sorted(res)
[0, 1, 2, 3, 4, 5]
>>> res.__class__ is s.__class__
True__rsub__>>> res = set((3,4,5)) - s
>>> sorted(res)
[5]
>>> res.__class__ is s.__class__
True__rxor__>>> res = set((3,4,5)) ^ s
>>> sorted(res)
[0, 1, 2, 5]
>>> res.__class__ is s.__class__
Truedifference>>> res = s.difference((3,4,5))
>>> sorted(res)
[0, 1, 2]
>>> res.__class__ is s.__class__
Trueintersection>>> res = s.intersection((3,4,5))
>>> sorted(res)
[3, 4]
>>> res.__class__ is s.__class__
Truesymmetric_difference>>> res = s.symmetric_difference((3,4,5))
>>> sorted(res)
[0, 1, 2, 5]
>>> res.__class__ is s.__class__
Trueunion>>> res = s.union((3,4,5))
>>> sorted(res)
[0, 1, 2, 3, 4, 5]
>>> res.__class__ is s.__class__
Truecopy returns…a copy.>>> res = s.copy()
>>> res == s
True
>>> res.__class__ is s.__class__
True |
zc.shortcut | Shortcuts
Shortcuts are objects that allow other objects (their target) to appear to
be located in places other than the target's actual location. They are
somewhat like a symbolic link in Unix-like operating systems.

Creating a shortcut
Shortcuts are created by calling the Shortcut class's constructor with a
target, parent, and name:>>> from zc.shortcut.shortcut import Shortcut
>>> class MyTarget:
... attr = 'hi'
... __parent__ = 'Original Parent'
... __name__ = 'Original Name'
>>> target = MyTarget()
>>> sc = Shortcut(target)
>>> sc.__parent__ = 'My Parent'
>>> sc.__name__ = 'My Name'A shortcut provides an attribute to access its target:>>> sc.target
<__builtin__.MyTarget instance at ...>A shortcut’s __parent__ and __name__ are independent of their target:>>> sc.__parent__
'My Parent'
>>> sc.target.__parent__
'Original Parent'
>>> sc.__name__
'My Name'
>>> sc.target.__name__
'Original Name'But the target knows the traversal parent, the traversal name, and the
shortcut. This allows the shortcut to have annotations that may be accessed
by views and other components that render or use the target.>>> sc.target.__traversed_parent__
'My Parent'
>>> sc.target.__traversed_name__
'My Name'
>>> sc.target.__shortcut__ is sc
True

See proxy.txt and adapters.txt for more details

Shortcut-related proxies
The zc.shortcut.proxy module includes some code useful outside of
the shortcut package and some code specifically for shortcut usage.

The generally useful code includes a decorator class that puts decorator
interfaces before all of the interfaces of the wrapped object (the opposite of
the behavior of zope.app.decorator.DecoratorSpecificationDescriptor). It
also includes a special implements() function that should be used to declare
that a proxy implements a given set of interfaces. Using the
zope.interface.implements() function instead will cause
interface.directlyProvides() to fail on the proxied object (and will also
have side effects, possibly causing other proxies with the same base class to
also be broken).>>> from zope import interface
>>> from zc.shortcut import proxy
>>> class I1(interface.Interface):
... pass
...
>>> class I2(interface.Interface):
... pass
...
>>> class I3(interface.Interface):
... pass
...
>>> class I4(interface.Interface):
... pass
...
>>> class D1(proxy.Decorator):
... proxy.implements(I1)
...
>>> class D2(proxy.Decorator):
... proxy.implements(I2)
...
>>> class X(object):
... interface.implements(I3)
...
>>> x = X()
>>> [i.getName() for i in interface.providedBy(D1(x))]
['I1', 'I3']
>>> [i.getName() for i in interface.providedBy(D2(D1(x)))]
['I2', 'I1', 'I3']
>>> dec_x = D2(D1(X()))
>>> interface.directlyProvides(dec_x, I4)
>>> [i.getName() for i in interface.providedBy(dec_x)]
['I2', 'I1', 'I4', 'I3']

Target proxies
Target proxies are the primary shortcut-specific proxy type.
When a shortcut is asked for its target it actually returns a proxy:>>> from zc.shortcut.shortcut import Shortcut
>>> class MyTarget:
... attr = 'hi'
... __parent__ = 'Original Parent'
... __name__ = 'Original Name'
>>> target = MyTarget()
>>> sc = Shortcut(target)
>>> sc.__parent__ = 'My Parent'
>>> sc.__name__ = 'My Name'
>>> proxy = sc.target
>>> proxy is target
FalseThe proxy acts as the target:>>> proxy == target
True>>> target.__parent__
'Original Parent'
>>> proxy.__parent__
'Original Parent'>>> target.__name__
'Original Name'
>>> proxy.__name__
'Original Name'>>> target.attr
'hi'
>>> proxy.attr
'hi'

The proxy also has attributes that point to the shortcut and its parent and
name:>>> proxy.__shortcut__ is sc
True
>>> proxy.__traversed_parent__
'My Parent'
>>> proxy.__traversed_name__
'My Name'

As discussed in adapters.txt, once a traversal passes through a shortcut, all
contained objects receive their own target proxies even if they did not
themselves come from a shortcut. They have __traversed_parent__ and
__traversed_name__ attributes, pointing to the target proxy of the object
traversed to find them and the name used, respectively, but no __shortcut__
attribute: they effectively implement interfaces.ITraversalProxy and not
interfaces.ITargetProxy.

Target proxies and the zope interface package are able to coexist with one
another happily. For instance, consider the case of directlyProvides():>>> list(interface.providedBy(target))
[]
>>> import pprint
>>> pprint.pprint(list(interface.providedBy(proxy)))
[<InterfaceClass zc.shortcut.interfaces.ITargetProxy>]
>>> class IDummy(interface.Interface):
... "dummy interface"
...
>>> interface.directlyProvides(proxy, IDummy)
>>> pprint.pprint(list(interface.providedBy(proxy)))
[<InterfaceClass zc.shortcut.interfaces.ITargetProxy>,
<InterfaceClass __builtin__.IDummy>]
>>> list(interface.providedBy(target))
[<InterfaceClass __builtin__.IDummy>]

Adapters
Adapters are provided to allow a shortcut to act as the target would when
traversed.

ITraversable
First we have to import the interfaces we'll be working with:>>> from zope.publisher.interfaces import IRequest
>>> from zope.publisher.interfaces.browser import IBrowserPublisher
>>> from zope.traversing.interfaces import ITraversable
>>> from zc.shortcut.interfaces import IShortcut
>>> from zope.location.interfaces import ILocation
>>> from zc.shortcut import interfacesIf we have a target object with a root:>>> from zope import interface, component
>>> class ISpam(interface.Interface):
... pass
>>> class Spam:
... interface.implements(ISpam, ILocation)
... def __init__(self, parent, name):
... self.__parent__ = parent
... self.__name__ = name
>>> from zope.traversing.interfaces import IContainmentRoot
>>> class DummyContainmentRoot(object):
... __parent__ = __name__ = None
... interface.implements(IContainmentRoot)
...
>>> root = DummyContainmentRoot()
>>> real_parent = Spam(root, 'real_parent')
>>> target = Spam(real_parent, 'target')The target object provides a multiadapter for the target and request to an
ITraversable so it can be traversed:>>> class SpamTraversableAdapter:
... interface.implements(ITraversable)
... component.adapts(ISpam, IRequest)
... def __init__(self, spam, request):
... self.spam = spam
>>> component.provideAdapter(SpamTraversableAdapter, name='view')There is an adapter to return the target object adapted to ITraversable when
a shortcut and request is adapted to ITraversable. For example if we create
a shortcut to our target:>>> from zc.shortcut.shortcut import Shortcut
>>> shortcut = Shortcut(target)
>>> shortcut_parent = Spam(root, 'shortcut_parent')
>>> shortcut.__parent__ = shortcut_parent
>>> shortcut.__name__ = 'shortcut'And call the adapter with a request:>>> from zope.publisher.browser import TestRequest
>>> from zc.shortcut.adapters import ShortcutTraversalAdapterFactory
>>> request = TestRequest()
>>> adapter = ShortcutTraversalAdapterFactory(shortcut, request)The result is the target’s ITraversal adapter:>>> adapter
<...SpamTraversableAdapter instance at...>
>>> adapter.spam
<...Spam instance at...>Shortcut traversalShortcut traversal is unpleasantly tricky. First consider the case of
traversing a shortcut and then traversing to get the default view
(‘index.html’). In that case, the shortcut will be available to the view,
and breadcrumbs and other view elements that care about how the object was
traversed will merely need to look at the shortcut’s __parent__, or the
target proxy’s __traversed_parent__. This is not too bad.It becomes more interesting if one traverses through a shortcut to another
content object. A naive implementation will traverse the shortcut by
converting it to its target, and then traversing the target to get the
contained content object. However, views for the content object will have no
idea of the traversal path used to get to the content object: they will only
have the __parent__ of the content object, which is the shortcut’s targetwithout any target proxy. From there they will be able to find the target’s
parent, but not the traversed shortcut’s parent. Breadcrumbs and other
components that care about traversed path will be broken.In order to solve this use case, traversing a shortcut needs to traverse the
target and then wrap the resulting object in another target proxy that
holds a reference to the shortcut’s target proxy as its traversed parent.Traversing a shortcut and finding another shortcut is slightly trickier again.
In this case, the shortcut’s target’s proxy should have a parent which is the
shortcut’s proxy’s parent.Two adapters are available for IPublishTraverse: one for shortcuts, and one
for traversal proxies. If a traversal target doesn’t provide IPublishTraverse,
then it should provide an adapter:>>> from zc.shortcut import adapters
>>> from zope.publisher.interfaces import IPublishTraverse
>>> child_spam = Spam(real_parent, 'child_spam')
>>> child_shortcut = Shortcut(child_spam)
>>> child_shortcut.__parent__ = shortcut
>>> child_shortcut.__name__ = 'child_shortcut'
>>> class SpamPublishTraverseAdapter:
... interface.implements(IPublishTraverse)
... component.adapts(ISpam, IRequest)
... def __init__(self, spam, request):
... self.spam = spam
... def publishTraverse(self, request, name):
... print 'SpamPublishTraverseAdapter has been traversed.'
... return {'child_spam': child_spam,
... 'child_shortcut': child_shortcut}[name]
>>> component.provideAdapter(SpamPublishTraverseAdapter)If it does, the adapter will be used to do the traversal:>>> adapter = adapters.ShortcutPublishTraverseAdapter(shortcut, request)
>>> adapter
<...ShortcutPublishTraverseAdapter object at...>
>>> from zope.interface.verify import verifyObject
>>> verifyObject(IPublishTraverse, adapter)
True
>>> res = adapter.publishTraverse(request, 'child_spam')
SpamPublishTraverseAdapter has been traversed.Notice that the traversed object has a traversal proxy (but not a target
proxy).>>> interfaces.ITraversalProxy.providedBy(res)
True
>>> interfaces.ITargetProxy.providedBy(res)
False
>>> res.__traversed_parent__ == shortcut.target
True
>>> res.__traversed_name__
'child_spam'
>>> res.__traversed_parent__.__shortcut__ is shortcut
True
>>> res.__traversed_parent__.__traversed_parent__ is shortcut_parent
TrueTo traverse further down and still keep the traversal information, we need to
register the ProxyPublishTraverseAdapter. Notice that we will also traverse
to a shortcut this time, and look at the traversal trail up from the shortcut
and from its target.>>> component.provideAdapter(adapters.ProxyPublishTraverseAdapter)
>>> from zope import component
>>> adapter = component.getMultiAdapter((res, request), IPublishTraverse)
>>> res = adapter.publishTraverse(request, 'child_shortcut')
SpamPublishTraverseAdapter has been traversed.
>>> res.__traversed_parent__ == child_spam
True
>>> res.__traversed_name__
'child_shortcut'
>>> res.__traversed_parent__.__traversed_parent__ == shortcut.target
True
>>> res.target.__traversed_parent__.__traversed_parent__ == shortcut.target
TrueIf, instead, the target implements IPublishTraverse itself…:>>> class SpamWithPublishTraverse(Spam):
... interface.implements(IPublishTraverse)
... def publishTraverse(self, request, name):
... print 'SpamWithPublishTraverse has been traversed.'
... return {'child_spam': child_spam,
... 'child_shortcut': child_shortcut}[name]…then its publishTraverse() will be called directly:>>> spam = SpamWithPublishTraverse(real_parent, 'special_spam')
>>> shortcut = Shortcut(spam)
>>> shortcut.__parent__ = shortcut_parent
>>> shortcut.__name__ = 'special_spam_shortcut'
>>> adapter = adapters.ShortcutPublishTraverseAdapter(shortcut, request)
>>> adapter
<...ShortcutPublishTraverseAdapter object at...>
>>> another = adapter.publishTraverse(request, 'child_spam')
SpamWithPublishTraverse has been traversed.

Ending traversal at a shortcut
When a shortcut is the target of a URL traversal, rather than a node
along the way, the leaf-node handling of the target object must be
invoked so that the shortcut behaves in the same way as it would
when accessed directly.

When a URL from a request represents an object (rather than a view),
the publisher uses the browserDefault() method of the
IBrowserPublisher interface to determine how the object should be
handled. This method returns an object and a sequence of path
elements that should be traversed.

For shortcuts, this is handled by delegating to the target of the
shortcut, substituting a proxy for the target so the traversedURL view
and breadcrumbs still work correctly.

Let's start by defining an IBrowserPublisher for ISpam objects:>>> class SpamBrowserPublisherAdapter(SpamPublishTraverseAdapter):
... interface.implements(IBrowserPublisher)
... def browserDefault(self, request):
... print "browserDefault for", repr(self.spam)
... return self.spam, ("@@foo.html",)
>>> component.provideAdapter(SpamBrowserPublisherAdapter,
... provides=IBrowserPublisher)
>>> adapter.browserDefault(request) # doctest: +ELLIPSIS
browserDefault for <...SpamWithPublishTraverse instance at 0x...>
(<...SpamWithPublishTraverse instance at 0x...>, ('@@foo.html',))traversedURLIf shortcuts are traversed, an absolute url can lead a user to unexpected
locations–to the real location of the object, rather than to the traversed
location. In order to get the traversed url, the adapters module provides a
traversedURL function, and the shortcut package also offers it from its
__init__.py.Given the result of the next-to-last shortcut traversal described
above, for instance, traversedURL returns a URL that behaves similarly to
absoluteURL except when it encounters target proxies, at which point the
traversal parents are used rather than the actual parents.>>> component.provideAdapter(adapters.TraversedURL)
>>> component.provideAdapter(adapters.FallbackTraversedURL)
>>> component.provideAdapter(adapters.RootTraversedURL)
>>> adapters.traversedURL(res, request)
'http://127.0.0.1/shortcut_parent/shortcut/child_spam/child_shortcut'Like absoluteURL, the returned value is html escaped.>>> shortcut_parent.__name__ = 'shortcut parent'
>>> adapters.traversedURL(res, request)
'http://127.0.0.1/shortcut%20parent/shortcut/child_spam/child_shortcut'Also like absoluteURL, traversedURL is registered as a view so it can be used
within page templates (as in context/@@traversedURL).>>> component.provideAdapter(adapters.traversedURL, name="traversedURL")
>>> component.getMultiAdapter((res, request), name='traversedURL')
'http://127.0.0.1/shortcut%20parent/shortcut/child_spam/child_shortcut'BreadcrumbsThe zc.displayname package provides a way to obtain breadcrumbs that is not
tied to the zope IAbsoluteURL interface and that takes advantage of
zc.displayname features like the display name generator. The zc.shortcut
package includes a breadcrumb adapter for the zc.displayname interface that is
aware of the traversal proxies that are part of the shortcut package.>>> import zc.displayname.adapters
>>> component.provideAdapter(zc.displayname.adapters.Breadcrumbs)
>>> component.provideAdapter(zc.displayname.adapters.TerminalBreadcrumbs)
>>> component.provideAdapter(zc.displayname.adapters.DefaultDisplayNameGenerator)
>>> component.provideAdapter(zc.displayname.adapters.SiteDisplayNameGenerator)
>>> from zope.publisher.interfaces.http import IHTTPRequest
>>> from zope.traversing.browser.interfaces import IAbsoluteURL
>>> from zope.traversing import browser
>>> component.provideAdapter(
... browser.AbsoluteURL, adapts=(None, IHTTPRequest),
... provides=IAbsoluteURL)
>>> component.provideAdapter(
... browser.SiteAbsoluteURL, adapts=(IContainmentRoot, IHTTPRequest),
... provides=IAbsoluteURL)
>>> component.provideAdapter(
... browser.AbsoluteURL, adapts=(None, IHTTPRequest),
... provides=interface.Interface, name='absolute_url')
>>> component.provideAdapter(
... browser.SiteAbsoluteURL, adapts=(IContainmentRoot, IHTTPRequest),
... provides=interface.Interface, name='absolute_url')
>>> component.provideAdapter(adapters.Breadcrumbs)
>>> from zc.displayname.interfaces import IBreadcrumbs
>>> bc = component.getMultiAdapter((res, request), IBreadcrumbs)
>>> import pprint
>>> pprint.pprint(bc()) # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
({'name': u'[root]',
'name_gen': <zc.displayname.adapters.SiteDisplayNameGenerator object at ...>,
'object': <...DummyContainmentRoot object at ...>,
'url': 'http://127.0.0.1'},
{'name': 'shortcut parent',
'name_gen': <zc.displayname.adapters.DefaultDisplayNameGenerator object at ...>,
'object': <...Spam instance at ...>,
'url': 'http://127.0.0.1/shortcut%20parent'},
{'name': 'target',
'name_gen': <zc.displayname.adapters.DefaultDisplayNameGenerator object at ...>,
'object': <...Spam instance at ...>,
'url': 'http://127.0.0.1/shortcut%20parent/shortcut'},
{'name': 'child_spam',
'name_gen': <zc.displayname.adapters.DefaultDisplayNameGenerator object at ...>,
'object': <...Spam instance at ...>,
'url': 'http://127.0.0.1/shortcut%20parent/shortcut/child_spam'},
{'name': 'child_shortcut',
'name_gen': <zc.displayname.adapters.DefaultDisplayNameGenerator object at ...>,
'object': <zc.shortcut.shortcut.Shortcut object at ...>,
'url': 'http://127.0.0.1/shortcut%20parent/shortcut/child_spam/child_shortcut'})
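The breadcrumb adapter above walks the `__parent__` chain of located objects and turns each step into a name and a URL. As a rough, dependency-free sketch of that idea (the `Location` class and `breadcrumbs` function here are illustrative stand-ins, not the zc.displayname API):

```python
from urllib.parse import quote

class Location:
    """A minimal located object: a name and a reference to its parent."""
    def __init__(self, parent, name):
        self.__parent__ = parent
        self.__name__ = name

def breadcrumbs(obj, base_url="http://127.0.0.1"):
    """Collect {'name', 'url'} dicts from the root down to ``obj``."""
    chain = []
    while obj is not None:
        chain.append(obj)
        obj = getattr(obj, "__parent__", None)
    chain.reverse()  # root first, like the doctest output above
    crumbs, url = [], base_url
    for node in chain:
        name = node.__name__
        if name is not None:
            url = url + "/" + quote(name)
        crumbs.append({"name": name or "[root]", "url": url})
    return crumbs
```

Because each traversed segment is URL-quoted separately, a name like "shortcut parent" becomes `shortcut%20parent`, matching the URLs in the output above.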
>>> pprint.pprint(bc(6)) # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
({'name': u'[root]',
'name_gen': <zc.displayname.adapters.SiteDisplayNameGenerator object at ...>,
'object': <...DummyContainmentRoot object at ...>,
'url': 'http://127.0.0.1'},
{'name': 'sho...',
'name_gen': <zc.displayname.adapters.DefaultDisplayNameGenerator object at ...>,
'object': <...Spam instance at ...>,
'url': 'http://127.0.0.1/shortcut%20parent'},
{'name': 'target',
'name_gen': <zc.displayname.adapters.DefaultDisplayNameGenerator object at ...>,
'object': <...Spam instance at ...>,
'url': 'http://127.0.0.1/shortcut%20parent/shortcut'},
{'name': 'chi...',
'name_gen': <zc.displayname.adapters.DefaultDisplayNameGenerator object at ...>,
'object': <...Spam instance at ...>,
'url': 'http://127.0.0.1/shortcut%20parent/shortcut/child_spam'},
{'name': 'chi...',
'name_gen': <zc.displayname.adapters.DefaultDisplayNameGenerator object at ...>,
'object': <zc.shortcut.shortcut.Shortcut object at ...>,
'url': 'http://127.0.0.1/shortcut%20parent/shortcut/child_spam/child_shortcut'})

Copy and Link

The zope.copypastemove package provides a number of interfaces for
copy, move, rename, and other similar operations. The shortcut package
provides a replacement implementation of copy for objects that looks up a
repository and uses it if available; an implementation of
copy that actually makes shortcuts (useful for immutable objects stored in a
repository); and an interface and two implementations, one for shortcuts and
one for other objects, for a new "link" operation, which makes a shortcut to
the selected object.

Copying an Object

If you want copying an object to use repositories if they are available, this
adapter provides the functionality. It is installed for all objects by
default, but could also be configured only for certain interfaces.

In the example below, first we set up the dummy content objects, then we
register the necessary adapters, and then we set up some event listener code
that we use to show what events are being fired.

>>> class IDummy(interface.Interface):
... pass
...
>>> import zope.app.container.interfaces
>>> class Dummy(object):
... interface.implements(
... IDummy, zope.app.container.interfaces.IContained)
>>> class DummyContainer(dict):
... interface.implements(zope.app.container.interfaces.IContainer)
... __parent__ = __name__ = None
... def __repr__(self):
... return "<%s at %d>" % (self.__class__.__name__, id(self))
...
>>> repo = DummyContainer()
>>> folder = DummyContainer()
>>> @component.adapter(IDummy)
... @interface.implementer(zope.app.container.interfaces.IContainer)
... def DummyRepoGetter(content):
... return repo
...
>>> component.provideAdapter(
... DummyRepoGetter, name=interfaces.REPOSITORY_NAME)
>>> from zope.app.container.contained import NameChooser
>>> component.provideAdapter(NameChooser, adapts=(interface.Interface,))
>>> # now, before we actually run the adding machinery, we'll
>>> # set up some machinery that will let us look at events firing
...
>>> heard_events = [] # we'll collect the events here
>>> from zope import event
>>> event.subscribers.append(heard_events.append)
>>> import pprint
>>> from zope import interface
>>> showEventsStart = 0
>>> def iname(ob):
... return iter(interface.providedBy(ob)).next().__name__
...
>>> def getId(ob):
... if ob is None or isinstance(ob, (int, float, basestring, tuple)):
... return "(%r)" % (ob,)
... id = getattr(ob, 'id', getattr(ob, '__name__', None))
... if not id:
... id = "a %s (%s)" % (ob.__class__.__name__, iname(ob))
... return id
...
>>> def showEvents(start=None): # to generate a friendly view of events
... global showEventsStart
... if start is None:
... start = showEventsStart
... res = [
... '%s fired for %s.' % (iname(ev), getId(ev.object))
... for ev in heard_events[start:]]
... res.sort()
... pprint.pprint(res)
... showEventsStart = len(heard_events)
...
>>> component.provideAdapter(adapters.ObjectCopier)
>>> from zope.app.container.contained import NameChooser
>>> component.provideAdapter(NameChooser, adapts=(interface.Interface,))
>>> dummy = Dummy()
>>> repo['dummy'] = dummy
>>> dummy.__parent__ = repo
>>> dummy.__name__ = 'dummy'
>>> dummy.id = 'foo'
>>> from zope import copypastemove
>>> copier = copypastemove.IObjectCopier(dummy)
>>> verifyObject(copypastemove.IObjectCopier, copier)
True
>>> copier.copyTo(folder)
'dummy'
>>> showEvents()
['IObjectCopiedEvent fired for foo.',
'IObjectCreatedEvent fired for a Shortcut (IShortcut).']
>>> folder['dummy'].raw_target is not dummy
True
>>> folder['dummy'].raw_target is repo['dummy-2']
True

>>> folder['dummy'].raw_target.id
'foo'
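The behavior shown above (the duplicate lands in the repository under a collision-free name, and the target folder receives only a shortcut) can be sketched without any Zope machinery. Everything below is a simplified illustration: `Shortcut`, `choose_name`, and `copy_via_repository` are hypothetical stand-ins for the real adapter, name chooser, and events, which are omitted here.

```python
import copy

class Shortcut:
    """Stand-in for zc.shortcut's Shortcut: wraps a target object."""
    def __init__(self, target):
        self.raw_target = target

def choose_name(container, name):
    """Append -2, -3, ... until the name is free (INameChooser-style)."""
    candidate, counter = name, 1
    while candidate in container:
        counter += 1
        candidate = "%s-%d" % (name, counter)
    return candidate

def copy_via_repository(obj, name, repository, folder):
    """Copy ``obj`` into the repository; put a Shortcut in ``folder``."""
    duplicate = copy.deepcopy(obj)
    repository[choose_name(repository, name)] = duplicate
    folder_name = choose_name(folder, name)
    folder[folder_name] = Shortcut(duplicate)
    return folder_name
```

With an object already stored as 'dummy' in the repository, copying it files the duplicate as 'dummy-2' and leaves only a shortcut in the folder, mirroring the doctest above.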
>>> folder.clear() # prepare for next test

Linking

In addition to the copy and move operations, the shortcut package offers up a
new ‘link’ operation: this creates a shortcut to the selected object. In the
case of linking a shortcut, the provided adapter links instead to the original
shortcut's target.

>>> from zope.app.container.constraints import contains
>>> class INoDummyContainer(interface.Interface):
... contains(ISpam) # won't contain shortcuts
...
>>> badcontainer = DummyContainer()
>>> interface.alsoProvides(badcontainer, INoDummyContainer)
>>> component.provideAdapter(adapters.ObjectLinkerAdapter)
>>> component.provideAdapter(adapters.ShortcutLinkerAdapter)
>>> dummy_linker = interfaces.IObjectLinker(dummy)
>>> shortcut_linker = interfaces.IObjectLinker(shortcut)
>>> verifyObject(interfaces.IObjectLinker, dummy_linker)
True
>>> verifyObject(interfaces.IObjectLinker, shortcut_linker)
True
>>> dummy_linker.linkable()
True
>>> shortcut_linker.linkable()
True
>>> dummy_linker.linkableTo(badcontainer)
False
>>> shortcut_linker.linkableTo(badcontainer)
False
>>> dummy_linker.linkableTo(folder)
True
>>> shortcut_linker.linkableTo(folder)
True
>>> dummy_linker.linkTo(badcontainer)
Traceback (most recent call last):
...
Invalid: ('Not linkableTo target with name', <DummyContainer...>, 'dummy')
>>> shortcut_linker.linkTo(badcontainer)
Traceback (most recent call last):
...
Invalid: ('Not linkableTo target with name', <DummyContainer...>, 'special_spam_shortcut')
>>> dummy_linker.linkTo(folder)
'dummy'
>>> showEvents()
['IObjectCreatedEvent fired for a Shortcut (IShortcut).']
>>> folder['dummy'].raw_target is dummy
True
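The `linkableTo` checks above consult the target container's constraints before creating anything. A minimal sketch of that pattern follows; the `allows` predicate on the container is an assumed, illustrative replacement for the real container-constraint machinery, and `Linker` is not the actual adapter class.

```python
class Shortcut:
    def __init__(self, target):
        self.raw_target = target

class Linker:
    """Sketch of an IObjectLinker-style adapter for a plain object."""
    def __init__(self, context):
        self.context = context

    def linkable(self):
        return True

    def linkableTo(self, target):
        # A container may veto shortcuts via a (hypothetical) predicate.
        allows = getattr(target, "allows", None)
        return allows is None or allows(Shortcut)

    def linkTo(self, target, name=None):
        if not self.linkableTo(target):
            raise ValueError("not linkable to target", target)
        name = name or getattr(self.context, "__name__", "link")
        target[name] = Shortcut(self.context)
        return name
```

A container that refuses shortcuts makes `linkableTo` return False and `linkTo` raise, while an unconstrained folder simply receives a new shortcut under the chosen name.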
>>> shortcut_linker.linkTo(folder)
'special_spam_shortcut'
>>> showEvents()
['IObjectCopiedEvent fired for a Shortcut (IShortcut).']
>>> folder['special_spam_shortcut'].raw_target is spam
True
>>> dummy_linker.linkTo(folder, 'dummy2')
'dummy2'
>>> showEvents()
['IObjectCreatedEvent fired for a Shortcut (IShortcut).']
>>> folder['dummy2'].raw_target is dummy
True
>>> shortcut_linker.linkTo(folder, 'shortcut2')
'shortcut2'
>>> showEvents()
['IObjectCopiedEvent fired for a Shortcut (IShortcut).']
>>> folder['shortcut2'].raw_target is spam
True

Copying as Linking

For some objects (immutable objects that are primarily stored in a repository,
for instance), having a copy gesture actually create a link may be desirable.
The adapters module provides an ObjectCopierLinkingAdapter for these use cases.
Whenever a copy is requested, a link is made instead. This adapter is not
registered for any interfaces by default: it is expected to be installed
selectively.

>>> class IImmutableDummy(IDummy):
... pass
...
>>> immutable_dummy = Dummy()
>>> interface.directlyProvides(immutable_dummy, IImmutableDummy)
>>> originalcontainer = DummyContainer()
>>> originalcontainer['immutable_dummy'] = immutable_dummy
>>> immutable_dummy.__name__ = 'immutable_dummy'
>>> immutable_dummy.__parent__ = originalcontainer
>>> component.provideAdapter(
... adapters.ObjectCopierLinkingAdapter, adapts=(IImmutableDummy,))
>>> copier = copypastemove.IObjectCopier(immutable_dummy)
>>> copier.copyable()
True
>>> copier.copyableTo(badcontainer)
False
>>> copier.copyableTo(folder)
True
>>> copier.copyTo(folder)
'immutable_dummy'
>>> showEvents()
['IObjectCreatedEvent fired for a Shortcut (IShortcut).']
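The linking copier's essential trick is small: a copy request produces a shortcut to the original instead of a duplicate. A hedged, stand-alone sketch (class and method names are illustrative, not the zc.shortcut API, and event firing is omitted):

```python
class Shortcut:
    def __init__(self, target):
        self.raw_target = target

class CopierLinkingAdapter:
    """Sketch: an IObjectCopier whose copyTo drops a shortcut to the
    original object instead of duplicating it."""
    def __init__(self, context):
        self.context = context

    def copyable(self):
        return True

    def copyTo(self, folder, name=None):
        name = name or self.context.__name__
        folder[name] = Shortcut(self.context)  # link, not a duplicate
        return name
```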
>>> folder['immutable_dummy'].raw_target is immutable_dummy
True

>>> event.subscribers.pop() is not None # cleanup
True

Shortcut IAdding

The shortcut adding has a couple of behaviors that differ from the standard Zope
3 adding. The differences are to support traversal proxies; and to provide
more flexibility for choosing the nextURL after an add.

Supporting Traversal Proxies

Both the action method and the nextURL method redirect to the absoluteURL of
the container in the zope.app implementation. In the face of shortcuts and
traversal proxies, this can generate surprising behavior for users, directing
their URL to a location other than where they thought they were working. The
shortcut adding changes both of these methods to use traversedURL instead. As
a result, adding to a shortcut of a container returns the user to the
shortcut, not the absolute path of the container’s real location; and
submitting the form of the default view of the adding redirects to within the
context of the traversed shortcut(s), not the absoluteURL.

The action method changes are pertinent to redirecting to an adding view.

>>> from zc.shortcut import adding, interfaces
>>> from zope import interface, component
>>> from zope.location.interfaces import ILocation
>>> class ISpam(interface.Interface):
... pass
...
>>> class Spam(dict):
... interface.implements(ISpam, ILocation)
... def __init__(self, parent, name):
... self.__parent__ = parent
... self.__name__ = name
...
>>> from zope.traversing.interfaces import IContainmentRoot
>>> class DummyContainmentRoot(object):
... interface.implements(IContainmentRoot)
...
>>> root = DummyContainmentRoot()
>>> real_parent = Spam(root, 'real_parent')
>>> target = Spam(real_parent, 'target')
>>> from zc.shortcut.shortcut import Shortcut
>>> shortcut = Shortcut(target)
>>> shortcut_parent = Spam(root, 'shortcut_parent')
>>> shortcut.__parent__ = shortcut_parent
>>> shortcut.__name__ = 'shortcut'
>>> from zc.shortcut import adapters
>>> component.provideAdapter(adapters.TraversedURL)
>>> component.provideAdapter(adapters.FallbackTraversedURL)
>>> component.provideAdapter(adapters.RootTraversedURL)
>>> from zope.publisher.interfaces import IRequest
>>> @component.adapter(interfaces.IAdding, IRequest)
... @interface.implementer(interface.Interface)
... def dummyAddingView(adding, request):
... return 'this is a view'
...
>>> component.provideAdapter(dummyAddingView, name='foo_type')
>>> from zope.publisher.browser import TestRequest
>>> request = TestRequest()
>>> adder = adding.Adding(shortcut.target, request)
>>> adder.action('foo_type', 'foo_id')
>>> request.response.getHeader('Location')
'http://127.0.0.1/shortcut_parent/shortcut/@@+/foo_type=foo_id'

The nextURL method changes are pertinent to the default behavior.

>>> adder.contentName = 'foo_id'
>>> target['foo_id'] = Spam(target, 'foo_id')
>>> adder.nextURL()
'http://127.0.0.1/shortcut_parent/shortcut/@@contents.html'

Adding Flexibility to 'nextURL'

The nextURL method in the zope.app implementation of an adding defines
precisely what the nextURL should be: the @@contents.html view of the context.
The shortcut adding recreates this behavior, but only after seeing if different
behavior has been registered.

nextURL tries to find an adapter named with the constant in
zc.shortcut.interfaces.NEXT_URL_NAME, providing nothing, for the adding, the
new content as found in the container (so it may be a shortcut), and the
context. If an adapter is registered, it should return a string, the nextURL
to be used; this value will be returned. If no adapter is registered or the
registered adapter returns None, the @@contents.html view of the context is
returned.

>>> @component.adapter(interfaces.IAdding, ISpam, ISpam)
... @interface.implementer(interface.Interface)
... def sillyNextURL(adding, content, container):
... return '%s class added "%s" to "%s"' % (
... adding.__class__.__name__,
... content.__name__,
... container.__name__)
...
>>> component.provideAdapter(sillyNextURL, name=interfaces.NEXT_URL_NAME)
>>> adder.nextURL()
'Adding class added "foo_id" to "target"'

Shortcut factories

Shortcut factories are factories that place objects in a configured folder and
then return a shortcut to the new object. Because they create objects and
place them in containers, they fire an object creation event, and usually the
configured folder fires an object added event.

>>> from zc.shortcut import factory, interfaces, Shortcut
>>> from zope import interface, component, event
>>> class IDummy(interface.Interface):
... pass
...
>>> from zope.location.interfaces import ILocation
>>> class Dummy(object):
... interface.implements(IDummy, ILocation)
... def __init__(self, *args, **kwargs):
... self.args = args
... self.kwargs = kwargs
...
>>> f = factory.Factory(Dummy, 'title', 'description')
>>> from zope.interface import verify
>>> verify.verifyObject(interfaces.IShortcutFactory, f)
True

The factory always returns an interface declaration for a shortcut from
getInterfaces, while getTargetInterfaces returns the declaration for the
created object.

>>> f.getInterfaces() == interface.implementedBy(Shortcut)
True
>>> f.getTargetInterfaces() == interface.implementedBy(Dummy)
True

Factories will fail to create an object if a container has not been
registered as a repository.

>>> f() # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
Traceback (most recent call last):
...
ComponentLookupError: (<Dummy...>, <...IContainer>, 'shortcutTargetRepository')

If we register a repository then the factory will fire a creation event, add
the object to the repository, and return a shortcut to the new object.

>>> import zope.app.container.interfaces
>>> class DummyContainer(dict):
... interface.implements(zope.app.container.interfaces.IContainer)
...
>>> repo = DummyContainer()
>>> @component.adapter(IDummy)
... @interface.implementer(zope.app.container.interfaces.IContainer)
... def DummyRepoGetter(content):
... return repo
...
>>> component.provideAdapter(
... DummyRepoGetter, name=interfaces.REPOSITORY_NAME)
>>> from zope.app.container.contained import NameChooser
>>> component.provideAdapter(NameChooser, adapts=(interface.Interface,))
>>> # now, before we actually run the adding machinery, we'll
>>> # set up some machinery that will let us look at events firing
...
>>> heard_events = [] # we'll collect the events here
>>> event.subscribers.append(heard_events.append)
>>> import pprint
>>> from zope import interface
>>> showEventsStart = 0
>>> def iname(ob):
... return iter(interface.providedBy(ob)).next().__name__
...
>>> def getId(ob):
... if ob is None or isinstance(ob, (int, float, basestring, tuple)):
... return "(%r)" % (ob,)
... id = getattr(ob, 'id', getattr(ob, '__name__', None))
... if not id:
... id = "a %s (%s)" % (ob.__class__.__name__, iname(ob))
... return id
...
>>> def showEvents(start=None): # to generate a friendly view of events
... global showEventsStart
... if start is None:
... start = showEventsStart
... res = [
... '%s fired for %s.' % (iname(ev), getId(ev.object))
... for ev in heard_events[start:]]
... res.sort()
... pprint.pprint(res)
... showEventsStart = len(heard_events)
...
>>> sc = f(12, 'foo', 'barbaz', sloop=19)
>>> showEvents()
['IObjectCreatedEvent fired for a Dummy (IDummy).']
>>> repo['Dummy'].args
(12, 'foo', 'barbaz')
>>> repo['Dummy'].kwargs
{'sloop': 19}
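The factory call above does three things: build the object with the given arguments, file it in the looked-up repository, and hand back a shortcut. A simplified, event-free sketch of that flow (names and the `get_repository` callback are assumptions standing in for the named-adapter lookup, not the real zc.shortcut factory API):

```python
class Shortcut:
    def __init__(self, target):
        self.raw_target = target

class ShortcutFactory:
    """Sketch: build the object, file it in a repository, return a
    shortcut to it."""
    def __init__(self, target_factory, get_repository,
                 shortcut_factory=Shortcut):
        self._target_factory = target_factory
        self._get_repository = get_repository
        self._shortcut_factory = shortcut_factory

    def __call__(self, *args, **kw):
        obj = self._target_factory(*args, **kw)
        repo = self._get_repository(obj)
        if repo is None:
            raise LookupError("no repository registered", obj)
        # Name choosing is simplified to the class name here.
        repo[self._target_factory.__name__] = obj
        return self._shortcut_factory(obj)
```

The optional `shortcut_factory` argument mirrors the alternate-shortcut-implementation feature documented below.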
>>> sc.raw_target is repo['Dummy']
True

>>> event.subscribers.pop() is not None # cleanup
True

Using alternate shortcut implementations

The shortcut factory takes an optional keyword parameter to specify
the factory used to create the shortcut. By default, zc.shortcut.Shortcut is used, but more specialized shortcuts may be
needed for some applications. This allows the factory to be used
regardless of the specific shortcut implementation.

Let's create an alternate class that can be used as a shortcut (it
doesn't really matter that the example class isn't useful):

>>> class AlternateShortcut(object):
... interface.implements(interfaces.IShortcut)
... def __init__(self, object):
... self.raw_target = object
... self.target = object

Now we can create a factory that creates instances of this class
instead of the default shortcut class:

>>> f = factory.Factory(Dummy, 'title', 'description',
... shortcut_factory=AlternateShortcut)

Using the factory returns an instance of our alternate shortcut
implementation:

>>> sc = f(1, 2, 3)
>>> isinstance(sc, AlternateShortcut)
True
>>> isinstance(sc.raw_target, Dummy)
True
>>> sc.target.args
(1, 2, 3) |
zc.signalhandler | This package allows registration of signal handlers from ZConfig configuration files within the framework provided by the Zope Toolkit.

Any number of handlers may be registered for any given signal.

To use this from your zope.conf file, ensure this package is
available to your application, and include a section like this:

%import zc.signalhandler
<signalhandlers log-handling>
USR1 ZConfig.components.logger.loghandler.reopenFiles
USR1 yourapp.tasks.doSomethingUseful
</signalhandlers>

See the README.txt inside the zc.signalhandler package for
complete documentation.

Release history

1.2 (2010-11-12)

- Moved development to zope.org, licensed under the ZPL 2.1.

1.1 (2007-06-21)

- This was a Zope Corporation internal release.
- Fix compatibility with zope.app.appsetup.product.

1.0 (2007-06-21)

- Initial Zope Corporation internal release. |
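The configuration section above binds two handlers to the same signal (USR1). The core mechanic, fanning one OS signal out to an ordered list of registered callables, can be sketched as below. The `SignalDispatcher` class is an illustrative assumption, not zc.signalhandler's actual implementation; only `signal.signal` is the real stdlib API.

```python
import signal

class SignalDispatcher:
    """Sketch: fan one OS signal out to any number of registered
    handlers, which is what a <signalhandlers> section configures."""
    def __init__(self):
        self._handlers = {}

    def register(self, signum, handler):
        # Multiple registrations per signal accumulate, in order.
        self._handlers.setdefault(signum, []).append(handler)

    def install(self, signum):
        # One real OS-level handler dispatches to every registration.
        signal.signal(signum, self._dispatch)

    def _dispatch(self, signum, frame):
        for handler in self._handlers.get(signum, []):
            handler(signum, frame)
```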
zc.sourcefactory | Source Factories

Source factories are used to simplify the creation of sources for certain
standard cases.

Sources split up the process of providing input fields with choices for users
into several components: a context binder, a source class, a terms class, and a
term class.

This is the correct abstraction and will fit many complex cases very well. To
reduce the amount of work to do for some standard cases, the source factories
allow users to define only the business relevant code for getting a list of
values, getting a token and a title to display.

Simple case

In the most simple case, you only have to provide a method that returns a list
of values and derive from BasicSourceFactory:

>>> import zc.sourcefactory.basic
>>> class MyStaticSource(zc.sourcefactory.basic.BasicSourceFactory):
... def getValues(self):
... return ['a', 'b', 'c']

When calling the source factory, we get a source:

>>> source = MyStaticSource()
>>> import zope.schema.interfaces
>>> zope.schema.interfaces.ISource.providedBy(source)
True

The values match our getValues-method of the factory:

>>> list(source)
['a', 'b', 'c']
>>> 'a' in source
True
>>> len(source)
3

Contextual sources

Sometimes we need context to determine the values. In this case, the
getValues-method gets a parameter context.

Let's assume we have a small object containing data to be used by the source:

>>> class Context(object):
... values = []

>>> import zc.sourcefactory.contextual
>>> class MyDynamicSource(
... zc.sourcefactory.contextual.BasicContextualSourceFactory):
... def getValues(self, context):
... return context.values

When instantiating, we get a ContextSourceBinder:

>>> binder = MyDynamicSource()
>>> zope.schema.interfaces.IContextSourceBinder.providedBy(binder)
True

Binding it to a context, we get a source:

>>> context = Context()
>>> source = binder(context)
>>> zope.schema.interfaces.ISource.providedBy(source)
True

>>> list(source)
[]

Modifying the context also modifies the data in the source:

>>> context.values = [1,2,3,4]
>>> list(source)
[1, 2, 3, 4]
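The binder/source split shown here can be captured in a small, Zope-free sketch: the factory acts as the context source binder, and the bound source defers every read to `getValues(context)`, which is why mutating the context is immediately visible. Class names below are illustrative, not the zc.sourcefactory API.

```python
class ContextualSourceFactory:
    """Sketch: calling the factory with a context binds a source."""
    def getValues(self, context):
        return context.values

    def __call__(self, context):
        return BoundSource(self, context)

class BoundSource:
    """Every operation re-reads the factory, so changes to the
    context show up immediately in the source."""
    def __init__(self, factory, context):
        self.factory = factory
        self.context = context

    def __iter__(self):
        return iter(self.factory.getValues(self.context))

    def __contains__(self, value):
        return value in self.factory.getValues(self.context)

    def __len__(self):
        return len(list(self.factory.getValues(self.context)))
```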
>>> 1 in source
True
>>> len(source)
4

It's possible to have the default machinery return different sources, by
providing a source_class argument when calling the binder. One can also
provide arguments to the source.

>>> class MultiplierSource(zc.sourcefactory.source.FactoredContextualSource):
... def __init__(self, factory, context, multiplier):
... super(MultiplierSource, self).__init__(factory, context)
... self.multiplier = multiplier
...
... def _get_filtered_values(self):
... for value in self.factory.getValues(self.context):
... yield self.multiplier * value
>>> class MultiplierSourceFactory(MyDynamicSource):
... source_class = MultiplierSource
>>> binder = MultiplierSourceFactory()
>>> source = binder(context, multiplier=5)
>>> list(source)
[5, 10, 15, 20]
>>> 5 in source
True
>>> len(source)
4

Filtering

In addition to providing the getValues-method you can also provide a
filterValue-method that will allow you to reduce the items from the list,
piece by piece.

This is useful if you want to have more specific sources (by subclassing) that
share the same basic origin of the data but have different filters applied to
it:

>>> class FilteringSource(zc.sourcefactory.basic.BasicSourceFactory):
... def getValues(self):
... return iter(range(1,20))
... def filterValue(self, value):
... return value % 2
>>> source = FilteringSource()
>>> list(source)
[1, 3, 5, 7, 9, 11, 13, 15, 17, 19]

Subclassing modifies the filter, not the original data:

>>> class OtherFilteringSource(FilteringSource):
... def filterValue(self, value):
... return not value % 2
>>> source = OtherFilteringSource()
>>> list(source)
[2, 4, 6, 8, 10, 12, 14, 16, 18]

The "in" operator gets applied also to filtered values:

>>> 2 in source
True
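The getValues/filterValue composition is essentially a generator pipeline: every value from `getValues` is screened by `filterValue`, and subclasses replace only the filter. A hedged, stand-alone sketch of that pattern (plain classes, not `BasicSourceFactory` itself):

```python
class FilteringSource:
    """Sketch: values come from getValues and are screened, one by
    one, by filterValue; subclasses override only the filter."""
    def getValues(self):
        return iter(range(1, 20))

    def filterValue(self, value):
        return value % 2  # keep odd numbers

    def __iter__(self):
        return (v for v in self.getValues() if self.filterValue(v))

    def __contains__(self, value):
        return any(v == value for v in iter(self))

    def __len__(self):
        return sum(1 for _ in iter(self))

class EvenSource(FilteringSource):
    def filterValue(self, value):
        return not value % 2  # keep even numbers
```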
>>> 3 in source
False

The "len" also gets applied to filtered values:

>>> len(source)
9

Scaling

Sometimes the number of items available through a source is very large. So
large that you only want to access them if absolutely necessary. One such
occasion is with truth-testing a source. By default Python will call
__nonzero__ to get the boolean value of an object, but if that isn’t available
__len__ is called to see what it returns. That might be very expensive, so we
want to make sure it isn't called.

>>> class MyExpensiveSource(zc.sourcefactory.basic.BasicSourceFactory):
... def getValues(self):
... yield 'a'
... raise RuntimeError('oops, iterated too far')

>>> source = MyExpensiveSource()
>>> bool(source)
True

WARNING about the standard adapters for ITerms

The standard adapters for ITerms are only suitable if the value types returned
by your getValues function are homogeneous. Mixing integers, persistent
objects, strings, and unicode within one source may create non-unique tokens.
In this case, you have to provide a custom getToken-method to provide unique
and unambiguous tokens.

Mapping source values

Sometimes a source provides the right choice of objects, but the actual values
we want to talk about are properties or computed views on those objects. The
mapping proxy source helps us to map a source to a different value space.

We start out with a source:

>>> source = [1,2,3,4,5]

and we provide a method that maps the values of the original source to the
values we want to see (we map the numbers to the characters in the English
alphabet):

>>> map = lambda x: chr(x+96)

Now we can create a mapped source:

>>> from zc.sourcefactory.mapping import ValueMappingSource
>>> mapped_source = ValueMappingSource(source, map)
>>> list(mapped_source)
['a', 'b', 'c', 'd', 'e']
>>> len(mapped_source)
5
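A value-mapping source is just a lazy wrapper: iteration applies the map function on the fly, and truth testing can be answered by peeking at a single item so a huge backing source is never fully iterated (the "Scaling" point below). A sketch under those assumptions, with an illustrative class name rather than the real `ValueMappingSource`:

```python
class MappedSource:
    """Sketch of a value-mapping wrapper around any iterable source."""
    def __init__(self, base, map):
        self.base = base
        self.map = map

    def __iter__(self):
        return (self.map(value) for value in self.base)

    def __contains__(self, value):
        return any(v == value for v in iter(self))

    def __len__(self):
        return sum(1 for _ in iter(self))

    def __bool__(self):
        # Answer truth testing from the first item only, so Python
        # never falls back to the (possibly expensive) __len__.
        for _ in iter(self):
            return True
        return False
```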
>>> 'a' in mapped_source
True
>>> 1 in mapped_source
False

You can also use context-dependent sources:

>>> def bindSource(context):
... return [1,2,3,4,5]
>>> from zc.sourcefactory.mapping import ValueMappingSourceContextBinder
>>> binder = ValueMappingSourceContextBinder(bindSource, map)
>>> bound_source = binder(object())
>>> list(bound_source)
['a', 'b', 'c', 'd', 'e']
>>> len(bound_source)
5
>>> 'a' in bound_source
True
>>> 1 in bound_source
False

Scaling

Sometimes the number of items available through a source is very large. So
large that you only want to access them if absolutely necessary. One such
occasion is with truth-testing a source. By default Python will call
__nonzero__ to get the boolean value of an object, but if that isn’t available
__len__ is called to see what it returns. That might be very expensive, so we
want to make sure it isn't called.

>>> class ExpensiveSource(object):
... def __len__(self):
... raise RuntimeError("oops, don't want to call __len__")
...
... def __iter__(self):
... return iter(range(999999))

>>> expensive_source = ExpensiveSource()
>>> mapped_source = ValueMappingSource(expensive_source, map)
>>> bool(mapped_source)
True

Custom constructors

Source factories are intended to behave as naturally as possible. A side-effect
of using a custom factory method (__new__) on the base class is that
sub-classes may have a hard time if their constructor (__init__) has a
different signature.

zc.sourcefactory takes extra measures to allow using a custom constructor with
a different signature.

>>> import zc.sourcefactory.basic

>>> class Source(zc.sourcefactory.basic.BasicSourceFactory):
...
... def __init__(self, values):
... super(Source, self).__init__()
... self.values = values
...
... def getValues(self):
... return self.values

>>> source = Source([1, 2, 3])
>>> list(source)
[1, 2, 3]

This is also true for contextual sources. The example is a bit silly
but it shows that it works in principle:

>>> import zc.sourcefactory.contextual
>>> default_values = (4, 5, 6)
>>> context_values = (6, 7, 8)
>>> class ContextualSource(
... zc.sourcefactory.contextual.BasicContextualSourceFactory):
...
... def __init__(self, defaults):
... super(ContextualSource, self).__init__()
... self.defaults = defaults
...
... def getValues(self, context):
... return self.defaults + context

>>> contextual_source = ContextualSource(default_values)(context_values)
>>> list(contextual_source)
[4, 5, 6, 6, 7, 8]

Common adapters for sources

To allow adapting factored sources specific to the factory, a couple of
standard interfaces that one can adapt to are looked up again as a
multi-adapter for (FactoredSource, SourceFactory).

ISourceQueriables

>>> from zc.sourcefactory.basic import BasicSourceFactory
>>> class Factory(BasicSourceFactory):
... def getValues(self):
... return [1,2,3]
>>> source = Factory()

>>> from zope.schema.interfaces import ISourceQueriables
>>> import zope.interface
>>> @zope.interface.implementer(ISourceQueriables)
... class SourceQueriables(object):
... def __init__(self, source, factory):
... self.source = source
... self.factory = factory
... def getQueriables(self):
... return [('test', None)]

>>> from zc.sourcefactory.source import FactoredSource
>>> zope.component.provideAdapter(factory=SourceQueriables,
... provides=ISourceQueriables,
... adapts=(FactoredSource, Factory))

>>> queriables = ISourceQueriables(source)
>>> queriables.factory
<Factory object at 0x...>
>>> queriables.source
<zc.sourcefactory.source.FactoredSource object at 0x...>
>>> queriables.getQueriables()
[('test', None)]

Cleanup

>>> zope.component.getSiteManager().unregisterAdapter(factory=SourceQueriables,
... provided=ISourceQueriables, required=(FactoredSource, Factory))
True

Browser views for sources created by source factories

Sources that were created using source factories already come with ready-made
terms and term objects.

Simple use

Let's start with a simple source factory:

>>> import zc.sourcefactory.basic
>>> class DemoSource(zc.sourcefactory.basic.BasicSourceFactory):
... def getValues(self):
... return [b'a', b'b', b'c', b'd']
>>> source = DemoSource()
>>> list(source)
[b'a', b'b', b'c', b'd']

We need a request first, then we can adapt the source to ITerms:

>>> from zope.publisher.browser import TestRequest
>>> import zope.browser.interfaces
>>> import zope.component
>>> request = TestRequest()
>>> terms = zope.component.getMultiAdapter(
... (source, request), zope.browser.interfaces.ITerms)
>>> terms
<zc.sourcefactory.browser.source.FactoredTerms object at 0x...>

For each value we get a factored term:

>>> terms.getTerm(b'a')
<zc.sourcefactory.browser.source.FactoredTerm object at 0x...>
>>> terms.getTerm(b'b')
<zc.sourcefactory.browser.source.FactoredTerm object at 0x...>
>>> terms.getTerm(b'c')
<zc.sourcefactory.browser.source.FactoredTerm object at 0x...>
>>> terms.getTerm(b'd')
<zc.sourcefactory.browser.source.FactoredTerm object at 0x...>

Unicode values are allowed as well:

>>> terms.getTerm('\xd3')
<zc.sourcefactory.browser.source.FactoredTerm object at 0x...>

Our terms are ITitledTokenizedTerm-compatible:

>>> import zope.schema.interfaces
>>> zope.schema.interfaces.ITitledTokenizedTerm.providedBy(
... terms.getTerm('a'))
True

In the most simple case, the title of a term is the string representation of
the object:

>>> terms.getTerm('a').title
'a'

If an adapter from the value to IDCDescriptiveProperties exists, the title
will be retrieved from this adapter:

>>> import persistent
>>> class MyObject(persistent.Persistent):
... custom_title = 'My custom title'
... _p_oid = 12
>>> class DCDescriptivePropertiesAdapter(object):
... def __init__(self, context):
... self.title = context.custom_title
... self.description = u""
>>> from zope.component import provideAdapter
>>> from zope.dublincore.interfaces import IDCDescriptiveProperties
>>> provideAdapter(DCDescriptivePropertiesAdapter, [MyObject],
... IDCDescriptiveProperties)
>>> terms.getTerm(MyObject()).title
'My custom title'

Extended use: provide your own titles

Instead of relying on string representation or IDCDescriptiveProperties
adapters you can specify the getTitle method on the source factory to
determine the title for a value:

>>> class DemoSourceWithTitles(DemoSource):
... def getTitle(self, value):
... return 'Custom title ' + value.custom_title
>>> source2 = DemoSourceWithTitles()
>>> terms2 = zope.component.getMultiAdapter(
... (source2, request), zope.browser.interfaces.ITerms)
>>> o1 = MyObject()
>>> o1.custom_title = u"Object one"
>>> o2 = MyObject()
>>> o2.custom_title = u"Object two"
>>> terms2.getTerm(o1).title
'Custom title Object one'
>>> terms2.getTerm(o2).title
'Custom title Object two'

Extended use: provide your own tokens

Instead of relying on default adapters to generate tokens for your values, you
can override the getToken method on the source factory to determine the
token for a value:

>>> class DemoObjectWithToken(object):
... token = None
>>> o1 = DemoObjectWithToken()
>>> o1.token = "one"
>>> o2 = DemoObjectWithToken()
>>> o2.token = "two"

>>> class DemoSourceWithTokens(DemoSource):
... values = [o1, o2]
... def getValues(self):
... return self.values
... def getToken(self, value):
... return value.token

>>> source3 = DemoSourceWithTokens()
>>> terms3 = zope.component.getMultiAdapter(
... (source3, request), zope.browser.interfaces.ITerms)

>>> terms3.getTerm(o1).token
'one'
>>> terms3.getTerm(o2).token
'two'

Looking up by the custom tokens works as well:

>>> terms3.getValue("one") is o1
True
>>> terms3.getValue("two") is o2
True
>>> terms3.getValue("three")
Traceback (most recent call last):
KeyError: "No value with token 'three'"

Value mapping sources

XXX to come

Contextual sources

Let's start with an object that we can use as the context:

>>> zip_to_city = {'06112': 'Halle',
... '06844': 'Dessa'}
>>> import zc.sourcefactory.contextual
>>> class DemoContextualSource(
... zc.sourcefactory.contextual.BasicContextualSourceFactory):
... def getValues(self, context):
... return context.keys()
... def getTitle(self, context, value):
... return context[value]
... def getToken(self, context, value):
... return 'token-%s' % value
>>> source = DemoContextualSource()(zip_to_city)
>>> sorted(list(source))
['06112', '06844']

Let's look at the terms:

>>> terms = zope.component.getMultiAdapter(
... (source, request), zope.browser.interfaces.ITerms)
>>> terms
<zc.sourcefactory.browser.source.FactoredContextualTerms object at 0x...>

For each value we get a factored term with the right title from the context:

>>> terms.getTerm('06112')
<zc.sourcefactory.browser.source.FactoredTerm object at 0x...>
>>> terms.getTerm('06112').title
'Halle'
>>> terms.getTerm('06844')
<zc.sourcefactory.browser.source.FactoredTerm object at 0x...>
>>> terms.getTerm('06844').title
'Dessa'
>>> terms.getTerm('06844').token
'token-06844'

And in reverse we can get the value for a given token as well:

>>> terms.getValue('token-06844')
'06844'

Interfaces

Both the FactoredSource and FactoredContextualSource have associated
interfaces.

>>> from zc.sourcefactory import interfaces
>>> from zc.sourcefactory import source
>>> from zope import interface
>>> interface.classImplements(
... source.FactoredSource, interfaces.IFactoredSource)
>>> interface.classImplements(
... source.FactoredContextualSource, interfaces.IContextualSource)

Tokens

Tokens are an identifying representation of an object, suitable for
transmission among URL-encoded data.

The sourcefactory package provides a few standard generators for tokens:

>>> import zc.sourcefactory.browser.token

We have generators for strings:

>>> zc.sourcefactory.browser.token.fromString('somestring')
'1f129c42de5e4f043cbd88ff6360486f'

Unicode

Argh, I have to write the umlauts as unicode escapes, otherwise
distutils will have an encoding error in preparing the upload to PyPI:

>>> zc.sourcefactory.browser.token.fromUnicode(
... 'somestring with umlauts \u00F6\u00E4\u00FC')
'45dadc304e0d6ae7f4864368bad74951'

Integer

>>> zc.sourcefactory.browser.token.fromInteger(12)
'12'

Persistent

>>> import persistent
>>> class PersistentDummy(persistent.Persistent):
... pass
>>> p = PersistentDummy()
>>> p._p_oid = 1234
>>> zc.sourcefactory.browser.token.fromPersistent(p)
'1234'

If an object is persistent but has not been added to a database yet, it will
be added to the database of its __parent__:

>>> root = rootFolder
>>> p1 = PersistentDummy()
>>> p1.__parent__ = root
>>> zc.sourcefactory.browser.token.fromPersistent(p1)
'0x01'

If an object has no parent, we fail:

>>> p2 = PersistentDummy()
>>> zc.sourcefactory.browser.token.fromPersistent(p2)
Traceback (most recent call last):
...
ValueError: Can not determine OID for <builtins.PersistentDummy object at 0x...>

Security proxied objects are unwrapped to get to their oid or connection
attribute:

>>> from zope.security.proxy import ProxyFactory
>>> p3 = PersistentDummy()
>>> root['p3'] = p3
>>> p3.__parent__ = root
>>> p3p = ProxyFactory(p3)
>>> p3p._p_jar
Traceback (most recent call last):
...
zope.security.interfaces.ForbiddenAttribute: ('_p_jar', <builtins.PersistentDummy object at 0x...>)

>>> zc.sourcefactory.browser.token.fromPersistent(p3p)
'0x02'

As a side effect, p3 now has an _p_oid assigned. When an object already has
an OID, the connection is not queried, so a __parent__ would not be necessary:

>>> del p3.__parent__
>>> zc.sourcefactory.browser.token.fromPersistent(p3p)
'0x02'

Interfaces

>>> from zope.interface import Interface
>>> class I(Interface):
... pass
>>> zc.sourcefactory.browser.token.fromInterface(I)
'builtins.I'

Changes

2.0 (2023-02-23)

Add support for Python 3.8, 3.9, 3.10, 3.11.
Drop support for Python 2.7, 3.5, 3.6.

1.1 (2018-11-07)

Add support for Python 3.6 and 3.7.
Drop support for Python 3.3 and 3.4.

1.0.0 (2016-08-02)

Claim support for Python 3.4 and 3.5.
Drop support for Python 2.6.

1.0.0a1 (2013-02-23)

Added support for Python 3.3.
Drastically reduce testing dependencies to make porting easier.
Replaced deprecated zope.interface.implements usage with equivalent
zope.interface.implementer decorator.
Dropped support for Python 2.4 and 2.5.

0.8.0 (2013-10-04)

BasicSourceFactory now uses a class variable to tell what kind of
source to make. (Same mechanism as it was added for
ContextualSourceFactory in version 0.5.0.)

0.7.0 (2010-09-17)

Using Python's doctest instead of deprecated zope.testing.doctest.
Using zope.keyreference as test dependency instead of zope.app.keyreference.

0.6.0 (2009-08-15)

Change package homepage to PyPI instead of Subversion.
Dropped support for Zope 3.2 by removing a conditional import.
Use hashlib for Python 2.5 and later to avoid deprecation warnings.

0.5.0 (2009-02-03)

FactoredContextualSourceBinder.__call__ now accepts arguments giving the
args to pass to source class. ContextualSourceFactory now uses a class
variable to tell what kind of Source to make.

Use zope.intid instead of zope.app.intid.

Corrected e-mail address [email protected] been retired.

0.4.0 (2008-12-11)

Removed zope.app.form dependency. Changed ITerms import from
zope.app.form.browser.interfaces to
zope.browser.interfaces. [projekt01]

0.3.5 (2008-12-08)

Fixed bug in __new__ of contextual factories that would disallow
subclasses to use constructors that expect a different
signature. [icemac]

0.3.4 (2008-08-27)

Added all documents in package to long description, so they are
readable in pypi. [icemac]

0.3.3 (2008-06-10)

Fixed bug in __new__ of factories that would disallow subclasses to use
constructors that expect a different signature. (Thanks to Sebastian
Wehrmann for the patch.)

0.3.2 (2008-04-09)

Fixed scalability bug caused by missing __nonzero__ on ValueMappingSource.

0.3.1 (2008-02-12)

Fixed scalability bug caused by missing __nonzero__ on BasicSourceFactory.

0.3.0 (??????????)

Added class-level defaults for attributes that are declared in the
interfaces to not have the Zope 2 security machinery complain about
them.

0.2.1 (2007-07-10)

Fixed a bug in the contextual token policy that was handling the
resolution of values for a given token incorrectly.

0.2.0 (2007-07-10)

Added a contextual token policy interface that allows getToken and
getValue to access the context for contextual sources.

Added a contextual term policy interface that allows createTerm and
getTitle to access the context for contextual sources.

Added compatibility for Zope 3.2 and Zope 2.9 (via Five 1.3) |
zc.sourcerelease |

Creating Source Releases from Buildouts

The zc.sourcerelease package provides a script,
buildout-source-release, that generates a source release from a
buildout. The source release is in the form of a gzipped tar
archive [1]. The generated source release can be used as the
basis for higher-level releases, such as RPMs or
configure-make-make-install releases.

The source release includes data that would normally be installed in
a download cache, such as Python distributions, or downloads performed
by the zc.recipe.cmmi recipe. If a buildout uses a recipe that
downloads data but does not store the downloaded data in the buildout
download cache, then the data will not be included in the source
release and will have to be downloaded when the source release is
installed.

The source release includes a Python install script. It is not
executable and must be run with the desired Python, which must be the
same version of Python used when making the release. The install
script runs the buildout in place. This means that the source release
must be extracted to, and the install script run in, the final install
location [2]. While the install script can be
used directly, it will more commonly be used by system-packaging
(e.g. RPM) build scripts or make files.

Installation

You can install the buildout-source-release script with easy install:

easy_install zc.sourcerelease

or you can install it into a buildout using zc.buildout.

Usage

To create a source release, simply run the buildout-source-release
script, passing a file URL or a subversion URL [3] and the name of the
configuration file to use. File URLs are useful for testing and can
be used with non-subversion source-code control systems.

Let's look at an example. We have a server with some distributions on
it.

>>> index_content = get(link_server)
>>> if 'distribute' in index_content:
... lines = index_content.splitlines()
... distribute_line = lines.pop(1)
... lines.insert(4, distribute_line)
... index_content = '\n'.join(lines)
>>> print index_content,
<html><body>
<a href="index/">index/</a><br>
<a href="sample1-1.0.zip">sample1-1.0.zip</a><br>
<a href="sample2-1.0.zip">sample2-1.0.zip</a><br>
<a href="setuptools-0.6c7-py2.4.egg">setuptools-0.6-py2.4.egg</a><br>
<a href="zc.buildout-1.0-py2.4.egg">zc.buildout-1.0-py2.4.egg</a><br>
<a href="zc.buildout-99.99-pyN.N.egg">zc.buildout-99.99-pyN.N.egg</a><br>
<a href="zc.recipe.egg-1.0-py2.4.egg">zc.recipe.egg-1.0-py2.4.egg</a><br>
</body></html>

We have the buildout-source-release installed in a local bin
directory. We'll create another buildout that we'll use for our
source release.

>>> mkdir('sample')
>>> sample = join(sample_buildout, 'sample')
>>> write(sample, 'buildout.cfg',
... '''
... [buildout]
... parts = sample
... find-links = %(link_server)s
...
... [sample]
... recipe = zc.recipe.egg
... eggs = sample1
... ''' % globals())We’ll run the release script against this sample directory:>>> print system(join('bin', 'buildout-source-release')
... +' file://'+sample+' buildout.cfg'),
... # doctest: +ELLIPSIS
Creating source release in sample.tgz
...

We end up with a tar file:

>>> ls('.')
- .installed.cfg
d bin
- buildout.cfg
d develop-eggs
d eggs
d parts
d sample
- sample.tgz

If we want to give the file a custom name, in this case something other than
sample.tgz, we can use the '-n' or '--name' option to specify one:

>>> print system(join('bin', 'buildout-source-release')
... +' file://'+sample+' buildout.cfg -n custom_name_one'),
... # doctest: +ELLIPSIS
Creating source release in custom_name_one.tgz
...

>>> print system(join('bin', 'buildout-source-release')
... +' file://'+sample+' buildout.cfg --name custom_name_two'),
... # doctest: +ELLIPSIS
Creating source release in custom_name_two.tgz
...

>>> ls('.')
- .installed.cfg
d bin
- buildout.cfg
- custom_name_one.tgz
- custom_name_two.tgz
d develop-eggs
d eggs
d parts
d sample
- sample.tgz

Let's continue with the example using sample.tgz. Extract the tar file to a
temporary directory:

>>> mkdir('test')
>>> import tarfile
>>> tf = tarfile.open('sample.tgz', 'r:gz')
>>> for name in tf.getnames():
... tf.extract(name, 'test')
>>> tf.close()

>>> ls('test')
d sample

>>> ls('test', 'sample')
- buildout.cfg
d eggs
- install.py
d release-distributions

The extracted sample directory has eggs for buildout and setuptools:

>>> ls('test', 'sample', 'eggs')
- setuptools-0.6c7-py2.4.egg
d zc.buildout-99.99-py2.4.egg

Note that version 99.99 of zc.buildout was used because it was the
most recent version on the link server. This happens to be different
than the version of buildout used by the source-release script.

It has a release-distributions directory containing distributions
needed to install the buildout:

>>> ls('test', 'sample', 'release-distributions', 'dist')
- sample1-1.0.zip
- sample2-1.0.zip
- zc.buildout-99.99-pyN.N.egg
- zc.recipe.egg-1.0.0b6-py2.4.egg

(There normally aren't distributions for buildout and setuptools, etc.
because these are pre-installed in the eggs directory of the source
release. In this case, we have a release for zc.buildout because it
was downloaded from the link server. Anything that we downloaded is
included.)

So, now that we've extracted the source release we built, we can try
to install it. To do this, we'll run the installer. Before we do,
however, we'll remove the data used by the link server:

>>> import os
>>> mkdir('sample_eggs_aside')
>>> for p in os.listdir(sample_eggs):
... os.rename(join(sample_eggs, p), join('sample_eggs_aside', p))
>>> print get(link_server),
<html><body>
</body></html>

This way, we know that when we run the source release, the
distributions will come from the release, not from the link
server. Now, let's run the installer:

>>> import sys

>>> print system(sys.executable+' '+join('test', 'sample', 'install.py')),
... # doctest: +ELLIPSIS
Creating directory ...

Running the installer simply builds out the saved buildout, using the
release-distribution as the source for installable eggs. In our case,
we get a sample script that we can run:

>>> print system(join('test', 'sample', 'bin', 'sample1')),
Hello. My name is sample1

Note that the sample bin directory doesn't contain a buildout script:

>>> ls('test', 'sample', 'bin')
- sample1

If we want one, we can run the install script again with an argument
of 'bootstrap'.

>>> print system(sys.executable+
... ' '+join('test', 'sample', 'install.py bootstrap')),
Generated script '/sample-buildout/test/sample/bin/buildout'.

>>> ls('test', 'sample', 'bin')
- buildout
- sample1

Note that the install script is a specialized buildout script, so
other buildout options can be provided, although this shouldn't
normally be necessary.

Often, we'll use file URLs for testing, but store the buildouts to be
released in a source code repository like subversion. We've created a
simple sample in subversion. Let's try to install it:

>>> print system(join('bin', 'buildout-source-release')+' '+
... 'svn://svn.zope.org/repos/main/zc.sourcerelease/svnsample'+
... ' release.cfg'),
... # doctest: +ELLIPSIS
Creating source release in svnsample.tgz
... The referenced section, 'repos', was not defined.

The svnsample config, release.cfg, has:

find-links = ${repos:svnsample}

Here, the expectation is that the value will be provided by a user's
default.cfg. We'll provide a value that points to our link
server. First, we'll put the sample eggs back on the link server:

>>> for p in os.listdir('sample_eggs_aside'):
... os.rename(join('sample_eggs_aside', p), join(sample_eggs, p))
>>> remove('sample_eggs_aside')

>>> print system(join('bin', 'buildout-source-release')+' '+
... 'svn://svn.zope.org/repos/main/zc.sourcerelease/svnsample'+
... ' release.cfg'+
... ' repos:svnsample='+link_server),
... # doctest: +ELLIPSIS
Creating source release in svnsample.tgz
...

>>> ls('.')
- .installed.cfg
d bin
- buildout.cfg
- custom_name_one.tgz
- custom_name_two.tgz
d develop-eggs
d eggs
d parts
d sample
- sample.tgz
- svnsample.tgz
d test

>>> mkdir('svntest')
>>> import tarfile
>>> tf = tarfile.open('svnsample.tgz', 'r:gz')
>>> for name in tf.getnames():
... tf.extract(name, 'svntest')
>>> tf.close()

>>> print system(sys.executable
... +' '+join('svntest', 'svnsample', 'install.py')),
... # doctest: +ELLIPSIS
Creating directory ...

>>> print system(join('svntest', 'svnsample', 'bin', 'sample')),
sample from svn called

You can specify a different configuration file, of course. Let's
create one with an error: it contains an absolute path for the
eggs-directory.

>>> write(sample, 'wrong.cfg',
... '''
... [buildout]
... parts = sample
... find-links = %(link_server)s
... eggs-directory = /somewhere/shared-eggs
...
... [sample]
... recipe = zc.recipe.egg
... eggs = sample1
... ''' % globals())We’ll run the release script against this configuration file:>>> print system(join('bin', 'buildout-source-release')
... +' file://'+sample+' wrong.cfg'),
... # doctest: +ELLIPSIS
Creating source release in sample.tgz
Invalid eggs directory (perhaps not a relative path) /somewhere/shared-eggs

[1] It is possible that an option will be added in the
future to generate zip files rather than tar archives.

[2] In the future, it is likely that we'll
also support a model in which the install script can install to a
separate location. Buildouts will have to take this into account,
providing for copying necessary files, other than just scripts and
eggs, into the destination directory.

[3] Other source
code control systems may be supported in the future. In the meantime,
you can check a project out to a directory and then use a file
URL to get the buildout-source-release script to use it.

Release History

0.4.0 (2012-12-17)

Added distribute support.
Symbolic links in projects are preserved.

0.3.1 (2009-09-25)

Fixed a latent bug that was exposed by recent changes to zc.buildout.
The bug caused installation scripts included in source releases to fail.

0.3.0 (2008-11-21)

New Features

You can now use a --name (or -n) option to specify the name for a
generated release.

Bugs Fixed

Having an absolute eggs-directory in buildout.cfg will now give an
error instead of running forever trying to find a relative path.

0.2 (2007-10-25)

New Features

Added support for passing buildout option settings as command-line
options when building sources to supply values normally provided by
~/.buildout/default.cfg.

Bugs Fixed

Non-standard eggs-directory settings weren't handled correctly.

0.1 (2007-10-24)

Initial release |
zc.sqs | This is a small wrapper around SQS that provides some testing support
and some abstraction over the boto SQS APIs.

There are 2 basic parts, a producer API and a worker API.

Note that these APIs don't let you pass AWS credentials. This means
that you must either pass credentials through ~/.boto configuration,
through environment variables, or through temporary credentials
provided via EC2 instance roles.

Producing jobs

To send work to workers, instantiate a Queue:

>>> import zc.sqs
>>> queue = zc.sqs.Queue("myqueue")
Connected to region us-east-1.

The SQS queue must already exist. Creating queues is outside the
scope of these APIs. Trying to create a Queue instance with a
nonexistent queue name will result in an exception being raised.

>>> import mock
>>> with mock.patch("boto.sqs.connect_to_region") as conn:
... conn().get_queue.return_value = None
... zc.sqs.Queue("nonexistent") # doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
...
NonExistentQueue: nonexistent

To place data in the queue, you call it. You can pass positional
and/or keyword arguments.

>>> queue(1, 2, x=3)
[[1, 2], {'x': 3}]

In this example, we're running in test mode. In test mode, data are
simply echoed back (unless we wire up a worker, as will be discussed
below).

Arguments must be JSON encodable.

Workers

Workers are provided as factories that accept configuration data and
return callables that are called with queued messages. A worker
factory could be implemented with a class that has __init__ and
__call__ methods, or with a function that takes configuration data and
returns a nested function to handle messages.

Normally, workers don't return anything. If the input is bad, the
worker should raise an exception. The exception will be logged, as
will the input data. If the input is good, but the worker can’t
perform the request, it should raise zc.sqs.TransientError to indicate
that the work should be retried later.

Containers

To attach your workers to queues, you use a container, which is just a
program that polls an SQS queue and calls your worker. There are
currently 2 containers:

sequential

The sequential container pulls requests from an SQS queue and hands
them to a worker, one at a time.This is a script entry point and accepts an argument list,
containing the path to an ini file. It uses “long polling” to loop
efficiently.

test

The test container is used for writing tests. It supports
integration tests of producer and worker code. When running in
test mode, it replaces (part of) the sequential container.

The sequential entry point takes the name of an ini file with 2 sections:

container

The container section configures the container with options:

worker MODULE:expr
  The worker constructor.
queue
  The name of an SQS queue to listen to.
loggers
  A ZConfig-based logger configuration string.

worker (optional)

Worker options, passed to the worker constructor as a dictionary.
If not provided, an empty dictionary will be passed.

Here's a simple (pointless) example to illustrate how this is wired
up. First, we'll define a worker factory:

def scaled_addr(config):
    scale = float(config.get('scale', 1))
    def add(a, b, x):
        if x == 'later':
            print ("not now")
            raise zc.sqs.TransientError  # Not very imaginative, I know
        print (scale * (a + b + x))
    return add

Now, we'll define a container configuration:

[container]
worker = zc.sqs.tests:scaled_addr
queue = adder
loggers =
<logger>
level INFO
<logfile>
path STDOUT
format %(levelname)s %(name)s %(message)s
</logfile>
</logger>
<logger>
level INFO
propagate false
name zc.sqs.messages
<logfile>
path messages.log
format %(message)s
</logfile>
</logger>
[worker]
scale = 2

Now, we'll run the container.

>>> import zc.thread
>>> @zc.thread.Thread
... def thread():
... zc.sqs.sequential(['ini'])

We ran the container in a thread because it runs forever and wouldn't
return.

Normally, the entry point would run forever, but since we're running
in test mode, the container just wires the worker up to the test
environment.

Now, if we create a queue (in test mode):

>>> adder = zc.sqs.Queue("adder")
Connected to region us-east-1.

and send it work:

>>> adder(1, 2, 3)
12.0
deleted '[[1, 2, 3], {}]'

We see that the worker ran.

We also see a testing message showing that the test succeeded.

If a worker can't perform an action immediately, it indicates that the
message should be delayed by raising TransientError as shown in the
worker example above:

>>> adder(1, 2, 'later')
not now

In this case, since the worker raised TransientError, the message
wasn’t deleted from the queue. This means that it’ll be handled later
when the job times out.

If the worker raises an exception, the exception and the message are
logged:

>>> adder(1, 2, '') # doctest: +ELLIPSIS
ERROR zc.sqs Handling a message
Traceback (most recent call last):
...
TypeError: unsupported operand type(s) for +: 'int' and '...'
deleted '[[1, 2, ""], {}]'>>> with open("messages.log") as f:
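The delete/retry policy these doctests demonstrate — a successful call or a hard error removes the message from the queue, while TransientError leaves it queued for redelivery after the visibility timeout — can be summarized in a small sketch. This is an illustration of the behavior described in this document, not the actual zc.sqs container code; the `dispatch` helper and its parameters are made up for the example:

```python
# Sketch of the message-dispatch policy described above (assumption:
# illustrative only, not the real zc.sqs container implementation).

class TransientError(Exception):
    """Raised by a worker that cannot handle the message right now."""

def dispatch(message, worker, delete, log_error):
    try:
        worker(message)
    except TransientError:
        # Leave the message in the queue; SQS redelivers it after
        # the visibility timeout expires.
        return
    except Exception as error:
        # Bad input: log the exception, then fall through so the
        # poisonous message is still deleted from the queue.
        log_error(error)
    delete(message)

# A worker mirroring the scaled_addr example: 'later' means retry,
# other bad input raises a hard error.
def worker(message):
    if message == "later":
        raise TransientError
    if message == "bad":
        raise ValueError(message)

deleted, errors = [], []
for message in ["ok", "later", "bad"]:
    dispatch(message, worker, deleted.append, errors.append)

print(deleted)  # 'later' stays queued; 'ok' and 'bad' are deleted
```

Note the asymmetry: only TransientError suppresses the delete, so a worker that raises an ordinary exception does not block the queue on a bad message.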
... print(f.read())
[[1, 2, ""], {}]
<BLANKLINE>Silencing testsSometimes, you don’t want the testing infrastructure to output
information when sending messages. There testingsetUpmethod
adds ansqs_queuesattribute to globals. You can callbe_silentto make it stop outputting infomation:>>> sqs_queues.be_silent()After calling this, any subsequent queues will be quiet:>>> queue = zc.sqs.Queue("quiet")
>>> queue(1)You can get the queued data:>>> [m.get_body() for m in sqs_queues.get_queue("quiet").get_messages()]
['[[1], {}]']You can switch back to being noisy:>>> sqs_queues.be_silent()>>> queue = zc.sqs.Queue("loud")
>>> queue(1)Changes1.0.0Python 3 support.0.3.0 (2014-10-17)Use long polling instead of a configurable polling interval.0.2.1 (2013-05-15)Better error handling when SQS queues don’t exist.0.2.0 (2013-05-15)A new silent mode for test queues.0.1.0 (2013-04-23)Initial release. |
zc.sshtunnel | UNKNOWN |
zc.ssl | UNKNOWN |
zc.table | This is a Zope 3 extension that helps with the construction of (HTML) tables.
Features include dynamic HTML table generation, batching and sorting.

CHANGES

1.0 (2023-02-17)

Drop support for Python 2.7, 3.5, 3.6.
Add support for Python 3.5, 3.7, 3.8, 3.9, 3.10, 3.11.

0.10.0 (2020-02-06)

Make Python 3 compatible.

0.9.0 (2012-11-21)

Using Python's doctest module instead of deprecated zope.testing.doctest.
Removed dependency on zope.app.testing and zope.app.form.
Add test extra, move zope.testing to it.

0.8.1 (2010-05-25)

Replaced HTML entities with unicode entities since they are valid XHTML.

0.8.0 (2009-07-23)

Updated tests to latest packages.

0.7.0 (2008-05-20)

Fixed HTML-encoding of cell contents for GetterColumn.
Add href attributes (for MSIE) and fix up JavaScript on Next/Prev links
for batching.
Update packaging.

0.6 (2006-09-22)

Initial release on Cheeseshop. |