**** BEGIN LOGGING AT Wed Mar 04 02:59:57 2020
Mar 04 11:36:30 I went through the datasheet and came to the conclusion that this battery ( https://amzn.to/2uREwii ) should do the job. Even if it isn't enough, we can add one more in series; that should work well enough. It's also light and portable enough to fit well into the case.
Mar 04 11:46:50 > <@freenode_Pac23:matrix.org> also lookup how you can run the bb on batteries that can fit into the case
Mar 04 11:51:00 I went through the datasheet and came to the conclusion that any 3.7 V LiPo should do the job. Adding 2 of them in series will work well enough. Moreover, they are light and compact enough to fit into the case.
Mar 04 19:32:49 hmmm?
Mar 05 01:11:40 Hello Sir @ds2
Mar 05 01:11:40 Can you share any repository or reference link so that I can understand the progress of work on both the GLES and YOLO optimizations?
Mar 05 01:11:40 It would be really helpful, as it may give me a direction in this limited time.
Mar 05 01:12:46 there isn't a link for GLES... look up GPGPU... most pages will talk about OpenGL... GLES is OpenGL ES - an embedded subset
Mar 05 01:12:58 for YOLO optimization, look up "NNPACK"
Mar 05 01:13:28 that basically brings in 2 things: ARM NEON support and using FFT instead of straight convolution
Mar 05 01:13:45 (convolution in time is multiplication in frequency)
Mar 05 01:15:30 Is this acceleration package already being used in TIDL?
Mar 05 01:15:43 I mean NNPACK
Mar 05 01:16:40 no
Mar 05 01:16:44 TIDL is completely separate
Mar 05 01:17:29 TIDL docs are available in 2 forms - the TI pages and the git tree (sources for the ARM side; EVE/DSP is mostly binary blobs, IIRC)
Mar 05 01:18:31 The git version (ti git) has almost no documentation. But the TI pages were good.
Mar 05 01:20:00 yes...
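[The FFT remark above is the standard convolution theorem. A minimal 1-D NumPy sketch (illustrative only; NNPACK's actual implementation does tiled 2-D FFT convolution in C) showing the two routes agree:]

```python
# Quick check of "convolution in time is multiplication in frequency",
# which is what lets NNPACK replace direct convolution with FFTs.
# Illustrative sketch only, not NNPACK's actual code.
import numpy as np

def fft_convolve(x, h):
    """Linear convolution of two 1-D signals via FFT."""
    n = len(x) + len(h) - 1          # full linear-convolution length
    # Zero-pad so circular convolution equals linear convolution
    X = np.fft.rfft(x, n)
    H = np.fft.rfft(h, n)
    return np.fft.irfft(X * H, n)    # pointwise product in frequency

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([0.5, 0.25])

direct = np.convolve(x, h)           # O(N*M) direct convolution
via_fft = fft_convolve(x, h)         # O(N log N) FFT route

assert np.allclose(direct, via_fft)
```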
it is reading through the docs
Mar 05 01:20:17 lots of C++ layers to deal with (horrible, IMO)
Mar 05 01:20:38 pretty much it comes down to figuring out the .txt file to pass in
Mar 05 01:21:02 Yeah, the configuration file...
Mar 05 01:21:16 on the darknet end, I looked at it a bit but never chased down whether it is possible to GPU-accelerate only certain layers
Mar 05 01:21:48 they have a CPU entry point and a GPU entry point, but it is unclear whether it is possible to set one to NULL and have it automatically use the right one
Mar 05 01:23:25 Only some of the layers are supported on the GPU; the rest need to be sent to the CPU.
Mar 05 01:23:26 We could also try that later... perhaps
Mar 05 01:30:11 * pradan[m] sent a long message: < https://matrix.org/_matrix/media/r0/download/matrix.org/nxDmBpyjRlfWmNEXBKqNQnwR >
Mar 05 01:55:38 yes
Mar 05 01:56:00 there are many ways to go... it would not hurt to look through GitHub to see if there are other forks that may help
Mar 05 01:56:26 some things for other boards may be applicable, like the NNPACK stuff that was recommended for the RPi
**** ENDING LOGGING AT Thu Mar 05 02:59:56 2020
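[The NULL-fallback idea discussed in the log (run a layer on the GPU when an accelerated entry point exists, otherwise fall back to the CPU entry point) can be sketched generically. All names here are hypothetical; darknet itself is C and its real layer structs differ:]

```python
# Hypothetical per-layer dispatch sketch, not darknet's actual API:
# each layer carries a CPU forward function and an optional GPU one,
# and the runner falls back to CPU whenever the GPU entry is None.

def relu_cpu(x):
    return [max(v, 0.0) for v in x]

def scale_cpu(x):
    return [v * 2.0 for v in x]

LAYERS = [
    {"name": "conv", "forward": scale_cpu, "forward_gpu": scale_cpu},  # pretend GPU impl
    {"name": "relu", "forward": relu_cpu,  "forward_gpu": None},       # GPU unsupported
]

def run_network(x, use_gpu=True):
    for layer in LAYERS:
        # Pick the GPU entry point if requested and present, else CPU
        fn = layer["forward_gpu"] if use_gpu and layer["forward_gpu"] else layer["forward"]
        x = fn(x)
    return x

print(run_network([-1.0, 2.0]))
```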