@ponyorm

Page 14 of 75
Luckydonald
22.12.2016
15:13:37
It would say AttributeError: p.stickers.count

Alexander
22.12.2016
15:13:38
We need to add it to the translator

It seems it is missing right now

Luckydonald
22.12.2016
15:18:24
The one at the top
AttributeError: Entity Sticker does not have attribute sticker: s.sticker

Py:
packs_n_sum = orm.select(
    (p.title, p.url, orm.count(st for st in p.stickers))
    for p in StickerPack
    if not exclude_single or orm.count(p.stickers) > 1
).order_by(orm.desc(3), 1).limit(limit)

SQL:
SELECT "p"."title", "p"."url", (
    SELECT COUNT(DISTINCT ("st"."file_id", "st"."emoji"))
    FROM "sticker" "st"
    WHERE "p"."url" = "st"."sticker_pack_url"
)
FROM "stickerpack" "p"
LEFT JOIN "sticker" "sticker" ON "p"."url" = "sticker"."sticker_pack_url"
GROUP BY "p"."title", "p"."url"
HAVING (%(p1)s OR COUNT(DISTINCT case when ("sticker"."file_id", "sticker"."emoji") IS NULL then null else ("sticker"."file_id", "sticker"."emoji") end) > 1)
ORDER BY 3 DESC, 1
LIMIT 100

Alexander
22.12.2016
15:21:19
> AttributeError: Entity Sticker does not have attribute sticker: s.sticker
But then you show some SQL. I don't understand what you mean: either you get the AttributeError, or you get the result of the translation.

Alexander
22.12.2016
15:24:48
By the way, it seems that the Sticker entity has a composite key. I think in your case it may be better to give it an auto-incremented id as the primary key and define a secondary composite key (file_id, emoji)

Alexander
22.12.2016
15:25:22
And why is this key composite? Shouldn't file_id be the primary key by itself?

Alexander
22.12.2016
15:31:55
Ok, I reproduced the bug with count. I'll fix it

Alexander
22.12.2016
16:01:52
What is the query you are trying to execute at this moment?

Luckydonald
22.12.2016
16:03:20
https://docs.ponyorm.com/api_reference.html#QueryStat
Are the timings in the QueryStat object seconds or microseconds?

Alexander
22.12.2016
16:03:36
Seconds

sum_time is the cumulative time over all executions of the same query

Luckydonald
22.12.2016
16:05:17
packs_n_sum = orm.select(
    (p.title, p.url, orm.count(st for st in p.stickers))
    for p in StickerPack
    if not exclude_single or orm.count(st for st in p.stickers) > 1
).order_by(orm.desc(3), 1).limit(limit)

sum_time: 1.5191729068756104 second(s), executed 3 time(s)

Now I get it. Same query since start of program

Alexander
22.12.2016
16:07:04
You are probably more interested in avg_time or max_time

Luckydonald
22.12.2016
16:07:40
Isn't there a "last_time"?

To get the value for the last execution?

Alexander
22.12.2016
16:09:49
I don't think you really need a last time. But if you want, you can call db.merge_local_stats() before execution of your query. It will clear all local statistics

Luckydonald
22.12.2016
16:10:47
Yeah, I'll go with average.

Alexander
22.12.2016
17:12:14
I recommend adding a column stickers_count to StickerPack and using it in queries. This way the queries will be much faster. You can update the column value manually or do it with a simple trigger

Also, fix primary key of Sticker, it most probably should be just file_id

Alexander
22.12.2016
17:35:50
Not yet, but it is relatively simple to write such a trigger manually. It will look something like this:

CREATE TRIGGER sticker_count_trigger
AFTER INSERT ON sticker
FOR EACH ROW EXECUTE PROCEDURE sticker_count_update();

CREATE OR REPLACE FUNCTION sticker_count_update() RETURNS trigger AS $BODY$
begin
    update "stickerpack" set sticker_count = (
        select count(*) from "sticker" st
        where st.sticker_pack = new.sticker_pack
    ) where id = new.sticker_pack;
end if;
return new;
end
$BODY$ LANGUAGE plpgsql VOLATILE;

Luckydonald
22.12.2016
19:03:50
> Not yet, but it is relatively simple to write such a trigger manually. It will look something like this: [...]
Yup, worked. Moved the function first, and removed that "end if;":

CREATE OR REPLACE FUNCTION sticker_count_update() RETURNS trigger AS $BODY$
begin
    update "stickerpack" set sticker_count = (
        select count(*) from "sticker" st
        where st.sticker_pack = new.sticker_pack
    ) where id = new.sticker_pack;
    return new;
end
$BODY$ LANGUAGE plpgsql VOLATILE;

CREATE TRIGGER sticker_count_trigger
AFTER INSERT ON sticker
FOR EACH ROW EXECUTE PROCEDURE sticker_count_update();

Alexander
22.12.2016
19:04:50
You need to define the sticker_count attribute as volatile: sticker_count = Required(int, default=0, volatile=True)

Luckydonald
22.12.2016
19:06:02
What does that mean?

Luckydonald
22.12.2016
19:06:59
And how can I create that per SQL?

*create

Alexander
22.12.2016
19:09:52
It means the attribute can change in the database even if it was not modified from Python, and that should not cause an OptimisticCheck error

Luckydonald
22.12.2016
19:11:16
Oh, okay. I thought that would be an SQL parameter, like deferred

Alexander
22.12.2016
19:13:27
No, it's just a Pony option for cache handling

Are there any contribution guidelines for the project?
Hi Nate, sorry for the late response. At this moment we have no specific contribution guidelines, but I think we should write them. Historically, most of the PonyORM code was written by a single person (me). That has the benefit that the code is consistent and free of internal architectural contradictions, but right now it may not look too friendly to external contributors. For example, it follows some formatting principles which differ from the PEP 8 guidelines. I think that before releasing Pony 1.0 we should reformat the code according to PEP 8, although I suspect that afterwards it will be harder for me to work with the code, because the current unorthodox formatting has some benefits for me which will be lost.

In general it may not be easy to accept contributions to PonyORM from outside developers. The problem that PonyORM solves is pretty complex, and each piece of new functionality has to handle many different kinds of issues correctly at the same time. When a new feature is added to PonyORM, changes may be needed in the translator (which generates SQL for different databases), in cache handling, composite keys, transactions, optimistic updates, and so on, and typically a pull request from an external developer contains only a small part of all the necessary functionality. Explaining all those details to each external developer who makes a pull request can be tiresome and unproductive; sometimes it is easier for a small group of core developers to implement the necessary functionality themselves.

On the other hand, I think we should gain experience in communicating with contributors, because such a complex project cannot be developed successfully by one or two developers alone. We need to attract more core developers, so maybe we need to change our attitude toward external contributions. It will slow us down, but may pay off in the end.

Are there any contribution guidelines for the project?
Now to the pull request your student made. If I understand correctly, the student's main goal is to gain experience working with (and contributing to) an OSS project. In that case it may be easier for him to contribute to a project which already has established contribution guidelines and whose maintainers have experience accepting external contributions. PonyORM was released under the Apache license only recently, and our experience with external contributions is still limited at this point. I hope that PonyORM will gradually become more open to such contributions.

The pull request itself looks good. We have some internal disputes about whether we should accept it, for the following reason. Currently we maintain the documentation in a separate repository. There are some benefits to having documentation and code in the same repository and combining a code change with the corresponding documentation change, but that is convenient when code and documentation are written by the same person. In PonyORM, different people write the code and the documentation. Because of that, it is more convenient for us to keep two repositories: the probability of merge conflicts is reduced, and unreleased code branches can be rebased more easily.

Your student's pull request suggests including the documentation in the code in the form of docstrings. That may be convenient for a programmer who wants to study the code, but after that change the documentation would be duplicated in two different places: in the separate documentation repository and inside the docstrings. It would increase the cost of writing documentation, because it would be necessary not to forget to update the corresponding docstring whenever the main documentation changes. It is possible to auto-generate documentation from the docstrings, but that would require us to change the documentation build process, and currently we are not ready for that.

Also, it is more convenient to work with documentation when it is contained in one text file rather than scattered across multiple docstrings. Because of that, at this moment we are more inclined to keep the documentation in a small number of text files in a separate repository. I think we are ready to accept the pull request and add docstrings to the code if someone will maintain them in the future and check that they correspond to the main documentation.

KF
23.12.2016
08:19:23
Hello everybody! Please help me with sharding. I need multiple connections to databases, and for each one I need to initialize the same models. Is it right to build a mapping of connection -> models, and to query the models not directly as select(c for c in Users), but as select(c for c in models.users)? For example (pseudocode):

db1 = Database()
db2 = Database()
databases = [db1, db2]
models = {}
for db in databases:
    class Users(db.Entity):
        name = Required(str)
    models[db] = [Users]

I could not find a solution for my case in the documentation, short of completely duplicating the model definitions:

class Users_1(db1.Entity): ...
class Users_2(db2.Entity): ...

Alexander
23.12.2016
08:49:22
You can define entities inside a function:

def define_entities(db):
    class Person(db.Entity):
        name = Required(str)
        contacts = Set("Contact")
    class Contact(db.Entity):
        person = Required("Person")
        type = Required(str)
        value = Required(str)

And then use that function to define the same entities for different databases:

def make_database(*args, **kwargs):
    db = Database()
    define_entities(db)
    db.bind(*args, **kwargs)
    return db

After an entity is defined, you can access it as an attribute of the db object. That is, you can write:

db = make_database('sqlite', ':memory:')
db.generate_mapping()
with db_session:
    p1 = db.Person(name='JohnSmith')
    c1 = db.Contact(person=p1, type='skype', value='john.smith')
    persons = select(p for p in db.Person if p.name.startswith('J'))[:]
    for p in persons:
        print(p.name)

Edit: forgot to add the db.generate_mapping() call

KF
23.12.2016
09:15:05
Alexander, thanks! That's what I need! PS: Huge thanks for the library, I started using Pony after watching your talks on YouTube

Alexander
23.12.2016
09:15:29
Thanks for the kind words :)

Luckydonald
23.12.2016
11:16:52
> Alexander, thanks! That's what I need! PS: thank you so much for the library, I started to use Pony after watching your talks on YouTube

KF
23.12.2016
13:55:15
One more question. If I have 2 db connections and run the following:

@db_session
def add_records():
    db1.insert("...")
    db1.moreActions()
    ...
    db2.insert("...")
    db1.moreActions()
    ...

Does @db_session start separate transactions in each database?

Alexander
23.12.2016
13:55:32
yes

KF
23.12.2016
13:55:53
thanks

Alexander
23.12.2016
13:58:38
Pony does not support distributed transactions yet, but attempts to do an intelligent commit. First, Pony chooses the "main" database. By default it is the database which was used first, but it is possible to set the main database explicitly. At commit time, Pony first tries to commit the main database. If that commit is not successful, Pony performs a rollback for all other databases. If the commit to the main database was successful, Pony tries to commit all the other databases.

Micaiah
24.12.2016
00:09:08
Why does PonyORM use groups and permissions for to_json but not to_dict?

Alexander
24.12.2016
00:57:37
to_dict was added before groups and permissions

Actually, we discovered some drawbacks with to_json, and want to replace it with a better API

Micaiah
24.12.2016
01:00:29
I'm trying to set up a simple JSON API with hug and Pony and wanted to use to_json, but groups/permissions seem to make it harder than it should be.

Alexander
24.12.2016
01:02:12
At this moment I recommend using to_dict

Micaiah
24.12.2016
01:02:35
Thanks! Any idea of what will replace to_json?

Luckydonald
24.12.2016
01:06:00
:D

Micaiah
24.12.2016
01:07:02
I've had too many experiences like this googling man pages

Alexander
24.12.2016
01:07:26
select(s for s in Student if s.name.startswith('J')).extract("""
    id, name, gpa,
    group {
        number, major,
        faculty {
            number, description
        },
        courses foreach(c) where(c.credits > 20) orderby(c.name) limit(10, offset=20) {
            id, name, credits
        }
    }
""")

The result will be a JSON-serializable structure

Micaiah
24.12.2016
01:10:31
Thanks! Looks great.

Alexander
24.12.2016
01:10:51
It will be a general query language for hierarchical queries with embedded Python expressions

Micaiah
24.12.2016
01:13:38
Not directly related to Pony, but is there a way to say something like: for each of the keys in kwargs, Student(key=kwargs[key]), for creating a new Student object? I thought I did something like this in the past, but I can't think of the way I did it.

Alexander
24.12.2016
01:14:15
Student(**kwargs) ?
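That is plain Python keyword-argument unpacking; a tiny sketch with an ordinary class standing in for the entity:

```python
class Student:
    def __init__(self, name, gpa):
        self.name = name
        self.gpa = gpa

# Each key in the dict becomes a keyword argument of the constructor
kwargs = {'name': 'John', 'gpa': 3.9}
s = Student(**kwargs)  # equivalent to Student(name='John', gpa=3.9)
```

With a Pony entity it works the same way, since entity constructors take attribute values as keyword arguments.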

Micaiah
24.12.2016
01:14:53
..I feel dumb now. For some reason I thought I needed something more complicated than that.

Alexander
24.12.2016
01:15:02
:)

Micaiah
24.12.2016
01:20:02
I usually get better results with Google, but Duck Duck Go tends to nail it with tech results

I wish the hug docs had better descriptions on unit testing best practices

Nate
24.12.2016
03:40:42
> Now to the pull request your student made. [...]
Thank you so much for your thoughtful response. I've discussed it with my student and he says he understands. He doesn't seem to have any expectation of contributing functionality to Pony; he just saw the opportunity to make a contribution and was excited to take a shot at it. Both of us are excited to see how Pony grows.

Micaiah
25.12.2016
18:25:55
When will Pony be available for 3.6?

Alexander
25.12.2016
21:07:28
I hope we can do a release which supports Python 3.6 before New Year

Micaiah
25.12.2016
21:09:02
Perfect!

Anything simple I can do to help?

Alexander
25.12.2016
21:11:27
I can't suggest anything specific right now; maybe I can tell you tomorrow

Micaiah
25.12.2016
22:13:22
Is there any word on when migration support is going to be added?

Luckydonald
27.12.2016
20:45:58
It is in the making, as far as I know. But @akozlovsky will probably tell you more

Micaiah
27.12.2016
20:48:30
It is in the making, as far as I know But @akozlovsky will probably tell you more
That's great news. The lack of migrations was my only complaint using Pony on my senior project

Alexander
27.12.2016
20:50:06
We are trying to do a release before the end of this week which includes support for Python 3.6 and the first version of migrations. Migrations are almost ready, but while implementing them it was necessary to change the way many-to-many intermediate tables are named. Previously the table was named entity1_entity2, and now it will be named entity1_collection1. This way of naming is better, and the change is necessary even if a developer doesn't plan to use migrations. We need to have a way to upgrade previously created databases. Regarding Python 3.6 support, today I was able to pass all tests except one on Python 3.6, so it is almost ready.

Michael
27.12.2016
20:50:48
Great news

Alexander
27.12.2016
20:57:24
The changes are in the decompiler; the bytecode was significantly changed

https://docs.python.org/3.6/whatsnew/3.6.html#optimizations

Micaiah
27.12.2016
20:59:06
The changes are in decompiler, the bytecode was significantly changed
Oh, that makes sense. I was curious about this as well.

Марк ☢
27.12.2016
20:59:48
They made the bytecode actually word-code (i.e. two bytes per opcode)

Alexander
27.12.2016
21:02:17
Yes. Also the details of some bytecodes (such as MAKE_FUNCTION, CALL_FUNCTION_xxx, etc.) were changed, and some new bytecodes (like BUILD_CONST_KEY_MAP) were added

I've got a django project and a hug project going on, and I'd like to eventually run Pony with both.
I think the main difference between Django and Pony tables is inheritance. Django creates a separate table for each subclass, while Pony keeps all subclass columns in the same table. In order to work with Django models, we would need to add Django-like inheritance to Pony, although it is less efficient than Pony's approach. Until then, Pony will treat a Django subclass as a separate model related by a foreign key to the base model.
