
Muhammad Afif
25.01.2017
16:36:29

Alexander
25.01.2017
16:37:29
you can set the size option. It may be 8, 16, 32, or 64

Muhammad Afif
25.01.2017
16:38:47
If I want to limit my int to 2 digits only, like with varchar, is it possible to use the size option?

Alexander
25.01.2017
16:41:28
you can specify max=99. It is also possible to specify size=16 if the goal is to reduce database size

Muhammad Afif
25.01.2017
16:42:35

Mark
26.01.2017
14:40:34
Hi!
I need to divide my query results into chunks. Is there a conventional way to do this?
I've tried to treat query results like a usual iterable, but it behaves unexpectedly
if I use something like:
def chunks(iterable, n):
    it = iterable
    while True:
        chunk = tuple(itertools.islice(it, n))
        if not chunk:
            return
        yield chunk
with a for-loop like this:
for chunk in chunks(it, 300):
    for l in chunk:
it runs forever
and if I use a function like this:
def chunkify(it, n):
    for i in range(0, len(it), n):
        yield it[i:i + n]
with the same for-loop it works n times no matter how many records there were
Oh, I forgot to say that I use a query with a collection, like this:
reviews = select((l.event_id, l.action, s.id, s.geometry)
                 for l in Logged_actions
                 for s in Spot)
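
As an aside, the infinite loop in the first helper most likely comes from `it = iterable`: if the query result is list-like, every `itertools.islice(it, n)` call restarts from the beginning, so the chunk is never empty. Converting the argument to a single shared iterator fixes it (a generic sketch, not Pony-specific):

```python
import itertools

def chunks(iterable, n):
    it = iter(iterable)  # one shared iterator that islice() consumes step by step
    while True:
        chunk = tuple(itertools.islice(it, n))
        if not chunk:
            return
        yield chunk

# With a list-like input the generator now terminates:
print(list(chunks(range(7), 3)))  # → [(0, 1, 2), (3, 4, 5), (6,)]
```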


Alexander
26.01.2017
14:55:55
Pony fetches all selected rows at once. For a typical web application, a query result should not contain too many rows. If you want to fetch millions of rows and process them in a for-loop, that looks like an antipattern to me.
If you really need to process a very big number of rows, you can fetch each chunk of rows with a separate query:
last_id = 0
page_size = 100
while True:
    with db_session:
        objects = select(x for x in MyEntity if x.id > last_id).order_by(MyEntity.id)[:page_size]
        if not objects:
            break
        for x in objects:
            do_something_with(x)
        last_id = objects[-1].id
Note that I process each chunk of objects in a separate db_session. If I used the same session, previously retrieved objects would continue to sit in it until the end of the db_session. In the future we may add the possibility to lazily fetch objects. But this is non-trivial, because we would need to garbage-collect previously loaded objects, and that is not easy to do without exiting the db_session


Mark
26.01.2017
15:29:19
I'm trying to transfer data from the DB to an Elasticsearch index, so it is not a web application and there can be a lot of rows.
The functions that I wrote previously work pretty well with a usual query (select(x for x in X)), but do not work for a query with a collection ((x, y) for x in X for y in Y)

Святослав
26.01.2017
15:31:04
https://www.postgresql.org/docs/9.2/static/plpgsql-cursors.html
I think Pony has no support for this; I just remember this feature in PostgreSQL

Alexander
26.01.2017
15:34:51
PonyORM does not support PostgreSQL server-side cursors (yet?).
I think even if your query selects pairs of objects:
select((x, y) for x in X for y in Y)
you can add a sensible order_by expression and split the objects into chunks as in my example

Святослав
26.01.2017
15:36:04
I think partial retrieval is better, because it's explicit.
Like Alexander said before?

Mark
26.01.2017
15:39:32
My goal was to reduce time and resources by removing extra queries. I don't see it as a good solution if, instead of 1 query, I perform 20 or 200. I will try adding an .order_by() clause, but maybe you could try to fix this?

Alexander
26.01.2017
15:53:19
I think it may be better to perform multiple big queries than one super-big query. But returning to the initial question:
> I need to divide my query results into chunks.
Why do you need this? My initial understanding was that a single query was bad for performance and you wanted to optimize it. Was my understanding correct?

Mark
26.01.2017
16:15:37
I need this because it is easier to feed chunks to Elastic.
It is actually not so critical, but it would be better if query results behaved similarly.

Luckydonald
26.01.2017
23:47:33
@mshekhter Is that piece of code you wrote open source?
Because transferring data into an Elasticsearch index seems to be something I need to do too, to improve performance
So maybe I could profit xD

Artur Rakhmatulin
27.01.2017
00:29:01
hello
How do I specify a schema when using an Oracle DB? My user doesn't have a default schema
thx

Luckydonald
27.01.2017
00:32:45
But you'll need a desktop browser; you can't use that on mobile

Artur Rakhmatulin
27.01.2017
00:36:55
lol ) sorry for my english )
I mean the SCHEMA for a USER in Oracle DB
I have a _test_ user in the DB.
To use _select_ queries I must write something like _SELECT * FROM SCHEMA.TABLE_

Luckydonald
27.01.2017
00:46:46
Use ` to mark code in telegram:
`test`

Micaiah
27.01.2017
01:26:26
Can someone explain the StopIteration change? I'm getting a DeprecationWarning but I don't know what to change it to
I'm on 3.6

Luckydonald
27.01.2017
01:28:42
Huh, not that I can help, but what is your code?

Micaiah
27.01.2017
01:29:38
for url in post.media:
    if found >= count:
        raise StopIteration

Micaiah
27.01.2017
01:30:18
bears.py:65: DeprecationWarning: generator 'get_timeline_media' raised StopIteration
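
For reference, this warning comes from PEP 479: since Python 3.5 (and mandatory in 3.7), a StopIteration raised inside a generator is converted to a RuntimeError instead of silently ending the generator. The fix is to `return` from the generator instead. A minimal sketch (the name `take_first` is made up for illustration):

```python
def take_first(items, count):
    # A generator that stops early: use `return`, not `raise StopIteration`
    found = 0
    for item in items:
        if found >= count:
            return  # was: raise StopIteration
        yield item
        found += 1

print(list(take_first(range(10), 3)))  # → [0, 1, 2]
```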

Alexander
27.01.2017
07:20:18
Beware that in Oracle there is a restriction on object name length: 30 characters max. When Pony creates indexes and foreign keys, the default name of a foreign key looks like FK_TABLENAME__COLUMNNAME, and if the name exceeds the limit of 30 characters it will be truncated. This can lead to an error where different constraints end up with the same name after truncation. It will be fixed in the next release.
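
The collision can be illustrated with plain string truncation (the table and column names below are made up):

```python
ORACLE_NAME_LIMIT = 30  # max identifier length in older Oracle versions

def truncated(name, limit=ORACLE_NAME_LIMIT):
    return name[:limit]

# Two distinct foreign keys that share a long prefix...
fk1 = truncated("FK_VERY_LONG_TABLE_NAME__FIRST_COLUMN")
fk2 = truncated("FK_VERY_LONG_TABLE_NAME__FIRST_COLUMN_2")

# ...end up with the same constraint name after truncation,
# which Oracle rejects as a duplicate.
print(fk1 == fk2)  # → True
```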

Artur Rakhmatulin
27.01.2017
07:58:10
Thanks for the detailed answer

Mark
27.01.2017
09:04:46
@luckydonald actually not, but I'm thinking about making an open-source module based on it
so, if you're interested, we can work together

Luckydonald
28.01.2017
12:41:21
Uh, not bad

Dave
31.01.2017
14:03:23
hey all, I'm having trouble with inserting a Decimal type into sqlite3. I know that sqlite3 doesn't have a native Decimal type; rather, it has a Numeric type... but Pony creates the schema as Decimal...
Is the canonical "fix" to create the schema as str and provide sqlite with an adapter?
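
One workaround at the raw sqlite3 level (a sketch of the adapter approach mentioned above, not necessarily Pony's canonical fix) is to store decimals as text and register an adapter/converter pair:

```python
import sqlite3
from decimal import Decimal

# Store Decimal values as their string representation...
sqlite3.register_adapter(Decimal, str)
# ...and convert them back when a column is declared with this type name.
sqlite3.register_converter("DECTEXT", lambda b: Decimal(b.decode()))

conn = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
conn.execute("CREATE TABLE prices (amount DECTEXT)")
conn.execute("INSERT INTO prices VALUES (?)", (Decimal("19.99"),))
value = conn.execute("SELECT amount FROM prices").fetchone()[0]
print(repr(value))  # → Decimal('19.99')
```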

Luckydonald
31.01.2017
14:22:23

Alexander
31.01.2017
14:39:21

Mark
31.01.2017
15:03:52
Hi everybody!
I tried to perform an update query via db.execute and got this:
pony.orm.dbapiprovider.ProgrammingError: can't adapt type 'builtin_function_or_method'
What is the reason for this error and how do I fix it?

Alexander
31.01.2017
15:10:42
Maybe you specified the date module instead of the date.date class, or something like that?

Mark
31.01.2017
15:15:55
I've got: from datetime import date

Alexander
31.01.2017
15:17:42
Can you show the traceback?


Mark
31.01.2017
15:17:59
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/site-packages/pony/orm/dbapiprovider.py", line 48, in wrap_dbapi_exceptions
    try: return func(provider, *args, **kwargs)
  File "/usr/local/lib/python3.5/site-packages/pony/orm/dbproviders/postgres.py", line 216, in execute
    else: cursor.execute(sql, arguments)
psycopg2.ProgrammingError: can't adapt type 'builtin_function_or_method'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/mark/deep-fish-map/src/indexer.py", line 408, in <module>
    main_loop()
  File "/Users/mark/deep-fish-map/src/indexer.py", line 398, in main_loop
    spind.clarify()
  File "/Users/mark/deep-fish-map/src/indexer.py", line 123, in clarify
    self._make_clarification(id)
  File "<string>", line 2, in _make_clarification
  File "/usr/local/lib/python3.5/site-packages/pony/orm/core.py", line 413, in new_func
    try: return func(*args, **kwargs)
  File "/Users/mark/deep-fish-map/src/indexer.py", line 152, in _make_clarification
    db.execute(sql)
  File "<string>", line 2, in execute
  File "/usr/local/lib/python3.5/site-packages/pony/utils/utils.py", line 58, in cut_traceback
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.5/site-packages/pony/orm/core.py", line 628, in execute
    return database._exec_raw_sql(sql, globals, locals, frame_depth=3, start_transaction=True)
  File "/usr/local/lib/python3.5/site-packages/pony/orm/core.py", line 640, in _exec_raw_sql
    return database._exec_sql(adapted_sql, arguments, False, start_transaction)
  File "/usr/local/lib/python3.5/site-packages/pony/orm/core.py", line 703, in _exec_sql
    connection = cache.reconnect(e)
  File "/usr/local/lib/python3.5/site-packages/pony/orm/core.py", line 1511, in reconnect
    if not provider.should_reconnect(exc): reraise(*sys.exc_info())
  File "/usr/local/lib/python3.5/site-packages/pony/utils/utils.py", line 85, in reraise
    try: raise exc.with_traceback(tb)
  File "/usr/local/lib/python3.5/site-packages/pony/orm/core.py", line 701, in _exec_sql
    try: new_id = provider.execute(cursor, sql, arguments, returning_id)
  File "<string>", line 2, in execute
  File "/usr/local/lib/python3.5/site-packages/pony/orm/dbapiprovider.py", line 50, in wrap_dbapi_exceptions
    except dbapi_module.ProgrammingError as e: raise ProgrammingError(e)
pony.orm.dbapiprovider.ProgrammingError: can't adapt type 'builtin_function_or_method'


Alexander
31.01.2017
16:22:37
In your _make_clarification method you execute a raw SQL query at line 152:
db.execute(sql)
In that query you use some parameter. It seems that you used a function to compute the param value and forgot the parentheses after the function name. Something like:
x = date.today
sql = 'select a from Table1 where b < $x'
It should instead be:
x = date.today()
sql = 'select a from Table1 where b < $x'

Mark
31.01.2017
16:49:05
hm
oops

Mark
31.01.2017
16:49:35
thank you!

Alexander
31.01.2017
16:49:41
Sure

Micaiah
01.02.2017
00:06:50
Has there been any work done on something like Flask-Admin for PonyORM?

Alexey
01.02.2017
07:54:23

Luckydonald
01.02.2017
16:31:23
That doesn't sound like a good idea.
Here is why.
- Flask-Admin is open source.
The editor isn't. I can't validate what it does with my database.
- Flask-Admin is something you have on your own server.
This means you don't have to expose your database to the public, as you would to make the external web service https://editor.ponyorm.com work.
Also, that means someone evil at ponyorm could change data in your database if they are already connected. Not saying you guys are evil, but it introduces an unnecessary security risk.
Also, if my domain is example.com, I'd rather browse to example.com/admin than editor.ponyorm.com/username/project/admin
To sum it up, as I understood it, I don't think that is a practical idea, and it's nothing I would be comfortable using.


Romet
01.02.2017
17:05:10
There's no way they plan for it to be an external service
That would be ridiculous

Alexander
01.02.2017
17:13:33
We have the following scenario in mind:
1) A developer designs some ER diagram
2) After that he has a database in the cloud which corresponds to that diagram
3) Then he can populate the database via an automatically generated admin interface, to be sure that the database is designed correctly
4) After that he gets an automatically generated backend API which allows him to work with this data programmatically
5) Then he can develop a frontend/mobile application which speaks to that API
6) If he changes the diagram, the database migrates automatically and the API changes accordingly
7) He can download the resulting backend application to his local server or continue to use it from the cloud

Alexey
01.02.2017
17:13:40

Luckydonald
01.02.2017
17:42:17

Alexander
01.02.2017
17:45:55
No, why phone back? I think we speak about different things

Luckydonald
01.02.2017
17:48:20

Romet
01.02.2017
17:50:51
Probably not
Bad example, you probably want to run your own instance of Sentry anyway
We do, at least

Alexander
01.02.2017
17:51:22
Sure, you will get a full application which can be deployed on your own server and used without any cloud service. The cloud backend is just an option for those developers who want a "dumb" backend for their mobile applications and don't want to mess with administering a server

Romet
01.02.2017
17:51:44
Yeah that's reasonable

Luckydonald
01.02.2017
21:00:23

Micaiah
01.02.2017
21:01:31
+

Luckydonald
01.02.2017
21:02:20
I was mistaken then, sorry.

Святослав
04.02.2017
03:42:34
I have this code: http://pastebin.com/0ztWZmRu (just pseudocode)
And when 100 threads are writing some data I get an IntegrityError (because obj A or C can be written in different threads).
Then I added retry=1 to db_session. And now I sometimes get an error like "mix objects from different transactions".
I have 2 questions:
1. What is the right way to concurrently upsert objects?
2. How do I correctly retry writing on a TransactionError (IntegrityError)?


Alexander
04.02.2017
12:51:56
Hi Святослав!
1) We need to add support for native PostgreSQL insert ... on conflict update .... Until then, the retry option should work. But retry=1 is not enough, because once in a while the same error can occur again. I'd use retry=5 or retry=10.
2) Nested db_sessions are ignored; only the topmost one is taken into account. You need to apply @db_session to a function which wraps the entire atomic transaction, such as the handling of an HTTP request or the loading of the next batch of objects. The write method in your example is just a part of a transaction and doesn't look like a good candidate for @db_session.
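
The mechanics behind retry can be sketched in plain Python: re-run the whole transactional function whenever a conflict-type error escapes it, up to N extra attempts (`ConflictError` and `with_retry` below are made-up stand-ins, not Pony APIs):

```python
class ConflictError(Exception):
    """Stand-in for the IntegrityError raised on a concurrent-write collision."""

def with_retry(retries):
    def decorator(func):
        def wrapper(*args, **kwargs):
            for attempt in range(retries + 1):
                try:
                    return func(*args, **kwargs)  # the whole transaction re-runs
                except ConflictError:
                    if attempt == retries:
                        raise  # out of attempts, propagate the error
        return wrapper
    return decorator

# A function that collides twice before succeeding:
calls = {"n": 0}

@with_retry(5)
def upsert():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConflictError
    return "ok"

print(upsert(), calls["n"])  # → ok 3
```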

Romet
04.02.2017
13:02:56
I'd just like to point out how much I appreciate that you guys avoid talking in Slavic languages here

Luckydonald
04.02.2017
13:48:13
Totally agree. This is fantastic!

Artur Rakhmatulin
04.02.2017
13:58:07
?

Святослав
04.02.2017
14:15:12
I mean each thread starts its own transaction, and the only way to collide with a different thread is via Pony internals, because after my write method the objects are not reused.
PS: My previous point that "obj A or C can be written in different threads" is incorrect. An object and its dependencies will be written in the same thread.