NetTalk Central

Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - AtoB

Pages: 1 2 [3] 4 5
31
Web Server - Ask For Help / automatic webserver restart
« on: November 16, 2016, 01:54:04 PM »
Hi all,

I'm having a problem right now: the webserver crashes irregularly, and I'm intensively looking for the cause.

But meanwhile: is there some way I could automatically restart the webserver in case of a crash? Currently the webserver runs as a normal exe, but it will eventually run as a service.
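For what it's worth, one low-tech sketch of the restart idea, as a POSIX-shell watchdog (the server name and retry count are placeholders; on Windows the same loop can live in a batch file, and once the server runs as a service, the service manager's recovery options are the more robust route):

```shell
#!/bin/sh
# Watchdog sketch: relaunch a command every time it exits, up to a
# maximum number of restarts. "$@" stands in for the real server binary.
supervise() {
  max=$1; shift
  n=0
  while [ "$n" -lt "$max" ]; do
    "$@"                                  # run the server; blocks until it dies
    echo "process exited with status $?; restarting" >&2
    n=$((n + 1))
    # in real use, add e.g. "sleep 5" here to avoid a tight restart loop
  done
}

# Demo: "false" exits immediately, so the watchdog restarts it 3 times.
supervise 3 false
```

In production this would be something like `supervise 999999 ./webserver.exe` (with a pause between restarts), but again, a hypothetical sketch only, not a tested deployment recipe.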

Thanks in advance,
Ton

32
Hi Kevin,

Hmmm, should I recreate my certificates with "127.0.0.1" or "localhost" then? Is that common for testing purposes?

I would have expected the browser to complain about the domain not matching the certificate,

will try this evening ....
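A minimal sketch of issuing a self-signed test certificate whose Common Name matches the host name typed in the browser (file names and the 365-day lifetime are just examples):

```shell
# Create a key pair and a self-signed certificate for CN=localhost
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=localhost" \
  -keyout localhost.key -out localhost.crt -days 365

# Double-check that the subject really says CN=localhost
openssl x509 -in localhost.crt -noout -subject
```

A browser comparing the address-bar host against the certificate name should then at least get past the name check; note that newer browsers match against the Subject Alternative Name rather than the CN.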

Thanks,
Ton

33
Hi All,

I'm trying to get my webservices secure, but can't get it to work in my test environment (locally, so I guess there is no firewall involved ...)

I want all traffic with this server to be secure, so my documentation should only be visible when accessed over https and my methods too.

I created my CA certificate (see my post last week: I did an OpenSSL (re-)install; it is now running version 1.1.0b, dated 26 September 2016, but I don't think this is the culprit right now ...)

I added my CA root certificate to both browsers (I use FF, but also tried IE), but neither shows my documentation when I type "https://localhost:443/myservicename".

- Firefox shows a message something like "Unable to connect to localhost ... error code: SSL_ERROR_NO_CYPHER_OVERLAP"
- IE says I should activate TLS 1.0 through 1.2 in my browser settings (which are already activated ...)

I also tried calling a method from my (Clarion) webclient testing procedure (over port 443 and using the https "prefix"), but this gives "The error number was -53 which means Open Timeout or Failure error - [SSL Error = 16]".

The last error led me to NetTalk Central (<g>), so I checked the NetSimple code, but it says "self.SSLMethod = NET:SSLMethodTLS" in the Init method (and as far as I know, I'm not changing it ...)

The following is the code that activates SSL server-side (both files are present in the \certificates folder):

  ThisWebserver.SSL = 1 ! Use SSL to make a Secure Web Server
  ThisWebserver.SSLCertificateOptions.DontVerifyRemoteCertificateCommonName = 1
  ThisWebserver.SSLCertificateOptions.DontVerifyRemoteCertificateWithCARoot = 1
  ThisWebserver.SSLCertificateOptions.CertificateFile = 'certificates\webservice.crt'
  ThisWebserver.SSLCertificateOptions.PrivateKeyFile = 'certificates\webservice.key'
  ThisWebserver.SSLCertificateOptions.ServerName = 'www.tvdb.nl.crt'
  ThisWebserver.MoveFolder(clip('web') & '\certificates','certificates')

I'm not sure what the "ServerName" property should contain in my test environment (I also tried "webservice.crt" ...).

I'm out of ideas right now ... Is there a way to somehow trace where this stuff stops working? I don't see any request coming in at the NT server, but I don't know whether https requests show up there at all.
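One way to trace where it stops, independent of NetTalk, is to check the certificate chain itself with OpenSSL. This is a self-contained sketch (all file names are illustrative, not the actual files from the setup above): build a toy CA, sign a server certificate with it, then run the same chain check a browser performs.

```shell
# 1. Create a toy CA root (key + self-signed certificate)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=MyTestCA" \
  -keyout ca.key -out ca.crt -days 365

# 2. Create the server key and a signing request for CN=localhost
openssl req -newkey rsa:2048 -nodes -subj "/CN=localhost" \
  -keyout webservice.key -out webservice.csr

# 3. Sign the request with the CA
openssl x509 -req -in webservice.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out webservice.crt -days 365

# 4. Verify the chain - prints "webservice.crt: OK" when it is sound
openssl verify -CAfile ca.crt webservice.crt
```

Against the live server, `openssl s_client -connect localhost:443` shows whether the TLS handshake completes at all, which helps separate certificate problems from cipher/protocol mismatches like SSL_ERROR_NO_CYPHER_OVERLAP.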

Any help is really appreciated!

TIA,
Ton

34
Web Server - Ask For Help / Re: creating ca certificate ...
« on: November 11, 2016, 12:00:04 AM »
Hi Bruce,

just found out that I needed to install OpenSSL first; I wrongly assumed it was part of the OS ... got my CA certificate!

Now diving into the next steps :-)

regards,
Ton

35
Web Server - Ask For Help / creating ca certificate ...
« on: November 08, 2016, 04:36:41 AM »
Hi all,

in order to secure a webservice I thought I'd create a certificate. Step one, creating the CA certificate, doesn't seem to work:

- I run CreateCACertificate.bat
- I type a password twice

then it seems to go wrong; the following is displayed:


Code:
--- Create Certificate using Private Key
(Please enter the same password you used earlier when asked to do so)

WARNING: can't open config file: /usr/local/ssl/openssl.cnf
Unable to load config info from /usr/local/ssl/openssl.cnf


--- Display Certificate

WARNING: can't open config file: /usr/local/ssl/openssl.cnf
Error opening Certificate .\YourCARoot\cacert\YourCA.crt
5612:error:02001002:system library:fopen:No such file or directory:.\crypto\bio\bss_file.c:391:fopen('.\YourCARoot\cacert\YourCA.crt','rb')
5612:error:20074002:BIO routines:FILE_CTRL:system lib:.\crypto\bio\bss_file.c:393:
unable to load certificate

It looks like an "openssl.cnf" file is missing somehow; at least I cannot find it ... I'm not being asked (for example) for a country code to fill in ...

- is this file required?
- are there any other requirements for this batch to complete?
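A sketch of working around the hard-coded /usr/local/ssl/openssl.cnf path by passing a config file explicitly (the file contents below are a bare-bones example, not a production configuration, and the DN values are placeholders):

```shell
# Write a minimal OpenSSL config; prompt=no means the DN fields below are
# used directly instead of interactive questions (country code, etc.)
cat > minimal.cnf <<'EOF'
[ req ]
distinguished_name = dn
prompt             = no

[ dn ]
C  = NL
CN = YourCA
EOF

# Create a CA key and self-signed root certificate using that config
openssl req -x509 -new -config minimal.cnf -newkey rsa:2048 -nodes \
  -keyout ca.key -out ca.crt -days 365

# Show the resulting subject
openssl x509 -in ca.crt -noout -subject
```

Alternatively, setting the OPENSSL_CONF environment variable to the full path of an existing openssl.cnf before running the batch file achieves the same thing without editing the openssl commands themselves.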

TIA,
Ton



36
Web Server - Ask For Help / Re: jFiles : improving parsing speed?
« on: September 19, 2016, 12:35:53 PM »
Hi Bruce,

for the time being (hoping for a jFiles adjustment in the end, of course :-)) I'll comment out the FillStructure .Trace call. (By the way, I think the StringTheory .Cat method is also called quite often when loading/parsing JSON; see .HandleChar ...) But really, don't make it a priority; I'm more than happy with the improvements made so far.

regards,
Ton

37
Web Server - Ask For Help / Re: jFiles : improving parsing speed?
« on: September 18, 2016, 03:18:00 AM »
Hi Bruce,

just ran my first test with the updated jFiles and StringTheory.

Last week my method with "my" (optimised) version of jFiles ran a 5000-record JSON file in approx. 1.63 sec.

Then I applied the most recent jFiles and StringTheory and did a couple of runs: 1.66 sec. So roughly the same.

But!

Now I simply commented out all the self.trace calls in the .FillStructure method (jFiles) and did another set of runs with the same data: 1.43 secs !!!

So effectively you did a better job than I did (I suspect the StringTheory .Cat method improves things greatly, which I didn't have in last week's version either).

Some questions/remarks:

1 - do you also see such a remarkable speed improvement when commenting out the self.Trace calls (on a somewhat larger input set)?

- I did find out why I had so many calls to the self.Trace method via the .GetObject method:
I was calling

jsonParameter &= SELF.rJSON.GetByName('apiVersion')

and when a property is not present it processes all objects (remarkably fast, b.t.w.), but calls the self.Trace method for each data element ... Now I'm issuing a

jsonParameter &= SELF.rJSON.GetByName('apiVersion',1)

(note the second parameter ...) and it only searches the top-level objects for this property.

2 - a suggestion related to the above: is it possible to simply get a couple of (simple) element values from the JSON prior to the complete parsing of the file? Sometimes I cannot control the incoming JSON and need some "strategic" info from the file that helps me decide how to process the thing (and whether I should process it at all). Currently I've created a procedure that seeks the first occurrence of a label and starts reading after that keyword until it decides it has the corresponding value. But it's not a clean implementation by any means. It would be much better if this were included in jFiles :-)

I've only tested parsing and haven't tried fastmem with the webservice yet. I first have some internal optimisations I can still look into; afterwards I'll see what effect fastmem has on the whole thing. I'll keep you posted.

Thanks so far, great improvements !

(less than two weeks ago this set took 2.24 sec to run, and now 1.43; if we can keep this pace, we'll be under 0.50 seconds by the end of the year :-) )

regards
Ton




38
Web Server - Ask For Help / Re: jFiles : improving parsing speed?
« on: September 16, 2016, 04:29:56 AM »
Hi Bruce,

if my wife lets me, I'll try the new versions this weekend; otherwise it will be early next week :-)

Thanks for the great support, will let you know the results

if you read this in time: have a nice weekend

regards
Ton

39
Web Server - Ask For Help / jFiles : improving parsing speed?
« on: September 14, 2016, 12:22:18 AM »
Hi Bruce,

I'm using jFiles/NetTalk webservices quite intensively and am now into the speed "thingy". I have to deal with tremendous amounts of rather large requests and have optimised my application and backend interaction as much as possible.

As jFiles ships with source, I can look for improvements there too (and you've probably guessed it :) so I did ... I've attached the modified 1.26 version; I hope you will consider implementing these adjustments and the suggestions below.

some requests and/or suggestions (not applied to the attached source <g>):

1 - something I haven't tried, but only thought of: would it be a good idea to create one top-level object as a container for most of the JSONClass properties, in one place? Then each created JSONClass object would need a reference to this top-level object. But this would probably create less overhead than all the cloning and cascading (up/down) of the properties for each object. The objects themselves can be smaller, so constructing them runs faster. Again, I don't grasp the concept of jFiles completely and I'm only thinking out loud :-)

2 - the general loading/parsing of the JSON file contains quite a lot of calls to the .HandleChar procedure, and since version 1.25 (?) you've added a third parameter, pPlace, that is filled with a call to substr() for each call. I haven't changed these in the attached version, but stripping them out improves the call to the LoadString method by another 4 to 5 percent. Maybe pass the pToParse string by reference and only substr it when an error occurs (which should rarely be the case ...)?

3 - would you consider removing the calls to SELF.Trace in at least both the .FillStructure method and the .GetObject method (when no object is found), or making them a configurable object property (with the downside of cascading ...)? I prefer to ship all my code with the debug option on, so these calls are effectively made in the production system and do slow down the system (on the 5000-record example this saves me 0.3 sec per request).


Now for the applied improvement (see the attached modified jFiles.clw/inc):

when loading JSON into a queue I noticed the field labels are processed for each queue record/JSON object. I've created a "preprocessor" for this, with little or no overhead, that reduces the total processing time of 5000 records from 2.24 sec to 1.95 sec (this is total "service time", so it includes API validation, fetching the 5000 records from the backend, producing a 5000-record result queue, and turning that into JSON too)

the following needs to be changed:

- in jFiles.inc : add the ColNameQType declaration
- in jFiles.inc : modify the prototype of the first FillStructure method to : FillStructure                   procedure(*GROUP pGroup, <ColNameQType pColNameQ>),Long,Proc,VIRTUAL

- in jFiles.clw : in the FillStructure method (first occurrence) the following is added (see around line 1539):

      if ~OMITTED(pColNameQ)
        GET(pColNameQ, c)
        nIndex = SELF.Position(pColNameQ.label)
!?       self.trace(all(' ',indent) & 'Trying to fill ' & clip(pColNameQ.label) & ' nIndex = ' & nIndex  )               
      else
        PropertyName.SetValue(WHO(pGroup,c))
        self.AdjustFieldName(PropertyName,self.TagCase)
        nIndex = SELF.Position(PropertyName.GetValue())
!?       self.trace(all(' ',indent) & 'Trying to fill ' & clip(PropertyName.GetValue()) & ' nIndex = ' & nIndex  )
      end

- in jFiles.clw the method "JSONClass.Load Procedure(QUEUE pQueue)" is modified: this does the actual preprocessing and passes the local queue to the .FillStructure method


There is also another FillStructure method that's called with a queue as a parameter (second occurrence), which I haven't touched (as I don't effectively use it), but it might also benefit from this!


Hope you have time to implement (at least some of) it. Let me know if I can be of any help; I'm willing to invest some time in it too!

regards,
Ton

40
Hi Bruce,

(didn't have time to test this earlier ...)

first results are really positive. I can now control null/omitted as I like (internally I now only use strings for importing queues and set these to <255,255,255,255,255> as omitted/null values; works like a charm)

I also did a simple test with the last "record" elements, where the <13,10> was appended to the data, and I think that bug is completely gone too!

Thanks a million, great support!

regards,
Ton

41
Hi Bruce,

I now see that there were two releases of jFiles that I didn't know of (I was on 1.22) :-)

In 1.24 you changed something that sounds like my problem, but I guess it's not exactly the same, as it doesn't seem to solve my scenario ...

In 1.23 you apparently now replace null values with empty strings, and with 0 for numeric fields (I can't find the change in the code though, but it's getting late here ...). Would you please consider making this an option for strings (the "null" is valuable to me ...)? But maybe this is not necessary if you provide the "ClearMode" option (see my other post) and simply don't assign the null values to strings (thus leaving them at <255,255 ...> so I can test for them ...)

Regards,
Ton

42
Hi Bruce,

I - think - I've spotted a tricky error in the parsing of the json.

Whenever a literal (null, true, false) or a numeric is processed as the last element of an array IN HUMAN READABLE form, the linebreak (<13,10>) that comes right after the value (before the closing "}" of the record structure is processed) gets added to these literals or numbers.

Below I've pasted an example that shows this misbehaviour. The array "orderLines" contains three elements, and all "ending" fields ("qAcceleratedType", "label1", "label1") have values that are directly followed by a linebreak. When you modify the "HandleChar" method of the JSONCLASS to include the trace like below, you see what's going on quite easily:

JSONClass.HandleChar              PROCEDURE(STRING pChar)
  CODE
  if SELF.Stack.StringStarted OR SELF.Stack.NumericStarted OR SELF.Stack.BooleanStarted OR SELF.Stack.NullStarted
    IF not SELF.Stack.Buffer &= NULL
      SELF.Stack.Buffer.Append(pChar)
      self.trace('pChar : ' & pChar & ' val(pChar) : ' & val(pChar))
    end
  end

I think the solution is maybe to add the following to the LoadString method, so a linebreak is only added within string values:

    of '<10>' orof '<13>'
      if SELF.Stack.StringStarted
        SELF.HandleChar(pToParse[c])
      end!if


This is the JSON:

{
   "orders_response": {
      "apiVersion": "1.0",
      "orderLines": [
         {
            "fkSetLineId": 12056305,
            "qAcceleratedType": 30
         },
         {
            "fkSetLineId": 12056307,
            "qAcceleratedType": 30,
            "label1": true
         },
         {
            "fkSetLineId": 12056307,
            "qAcceleratedType": 30,
            "label1": null
         }
      ]
   }
}

43
Hi Bruce,

>I might be tempted to argue that your API design needs work.

you could always become a diplomat if programming doesn't work out any more :-) And you're probably right too: there is lots of room for improvement in my design, and creating multiple methods partly solves the access-rights thing too. But (being stubborn as always):

- I really would like to keep all things in one place (easy maintenance)
- the access rights are already taken care of (and result in errors from the backend) (docs is a thingy, though ...)
- when the clients only specify modified fields, there are no "dirty values" coming in. Suppose both client one and client two read field1=0 and field2=0, and afterwards client one sets/returns field1=1 and client two sets field2=1 (and both leave out the other field): there are no side effects to either update action. If client1 returns field1=1, field2=0 and client2 returns field1=0, field2=1, then the final result depends on who came first ...
- saving bandwidth is really important too

I've thought about the "dud" values too. I first thought of walking through all fields of the queue/group structure and setting things to "null" explicitly somewhere after the clear(gr) in the .Load method, but your "AAAAAA" brought me to something better, perhaps; maybe you could incorporate it into jFiles (and xFiles too?):

introduce two new equates in jFiles.inc:

jF:ClearNormal EQUATE(0)
jF:ClearHigh EQUATE(1)

then add an extra property to the JSON object:

ClearMode    BYTE

and then modify the following in the .load method:

!---------------------------------------------------------------
! Load JSON Object to Queue
!---------------------------------------------------------------
JSONClass.load  Procedure(QUEUE pQueue)
json         &JSONClass
result       long
x            long
gr           &group
match        long
  CODE
  self.Action = jf:Load
  self.Using = jf:Queue
  self.Q &= pQueue
  self.CascadeUp()  ! copy current properties up the tree
  if self.FreeQueueBeforeLoad
    free(pQueue)
  end
  gr &= pQueue

  loop x = 1 to self.records()
    json &= self.Get(x)
    case self.ClearMode
    of jF:ClearNormal
      clear(gr)
    of jF:ClearHigh
      clear(gr, 1)
    end
    ....

I think this would get me going. I now only have to set the .ClearMode property and change all my queue elements to at least 4-character strings, and then I can handle omitted values and real "null" values too! It's fast and doesn't break any existing code (as far as I can tell ...)

Maybe the ".CascadeDown" method also needs the following line (so you only have to set this property on the topmost instantiation of the recursive objects ...):

SELF.ClearMode         = pParent.ClearMode

What do you think of this?

(if you don't want to incorporate it, I'll do it myself anyway :-) )

regards,
Ton

44
Hi Bruce,

I'm developing an API layer that several parties can make use of.

I've got "tables" with rather large record structures that are exposed in this method for update actions. The clients are supposed to supply only the fields that need updating (for several reasons: saving bandwidth, not overwriting "old" data, less programming effort on the client side, and different clients "owning" different parts of the record structure (different access rights)).

In the "Primeparameters" section, the queue (which is my "target" structure) nicely gets filled from the JSON (array) that clients supply, and I'm processing it manually afterwards in the buildresults section/routine. But after the queue is filled, there is no way for me to detect omitted/null values.

So I dived into jFiles and saw a great opportunity in the .FillStructure procedure (where nIndex = 0 for a field if it is omitted), where I could do it most efficiently (IMHO) without reprocessing the original JSON later on.

I would probably tie an external queue to the JSONClass and add entries for each record/field tuple that is omitted, so I could later easily test for the presence of these omitted entries ...

but maybe things can be done much simpler/faster ...

45
Hi Bruce,

today I've studied the jFiles objects a little and decided that I could do what I want in the ".FillStructure" procedure/method, so I tried to override the ".FillStructure" method of the JSONCLASS by creating my own JSON class (inheriting from JSONCLASS).

This is not easily done, I noticed: the class uses JSONCLASS and &JSONCLASS in lots of places (which would all need to point to "my" class now), so I ended up overriding lots of methods (GetByName, GetObject, Get ...). This seems a little impractical.

Is there an easy way to somehow override this single method (.FillStructure), so I could modify it for my own needs? I try to keep away from modifying the base classes themselves ...

(I'm thinking of providing the class with a reference to a queue, in which I store all ordinal queue fields of the target queue (the one used for loading the JSON) that are omitted (nIndex = 0 in .FillStructure) for each queue entry. Then adding a procedure .IsNull(QueuePosition, OrdinalFieldPosition) to the class and presto!)

regards,
Ton
