Another server question (mixing node.js and LC)


Another server question (mixing node.js and LC)

J. Landman Gay via use-livecode
Hello,

I have another server question. I really like scripting with LC, because I can make improvements very quickly. This is important because of my very limited free time.

But, I want to be able to handle many many concurrent server requests, the way node.js does.

Would it work to have Node take in a request, launch an LC CGI executable to process the request, set an event listener to wait for LC to send the results back to Node, then have Node return the results to the user?

This is not unlike using Apache to launch LC CGI processes, but the asynchronous nature of node would, presumably, tie up fewer system resources and allow for larger concurrency. This could mean having a couple thousand LC processes running at any one time - would that be okay as long as the server had enough RAM?

In general, would this work for a system that had to handle, say, 10,000 server requests per minute?

Sent from my iPhone
_______________________________________________
use-livecode mailing list
[hidden email]
Please visit this url to subscribe, unsubscribe and manage your subscription preferences:
http://lists.runrev.com/mailman/listinfo/use-livecode

Re: Another server question (mixing node.js and LC)

J. Landman Gay via use-livecode
jonathandlynch wrote:

 > I have another server question. I really like scripting with LC,
 > because I can make improvements very quickly. This is important
 > because of my very limited free time.
 >
 > But, I want to be able to handle many many concurrent server requests,
 > the way node.js does.

Good timing.  Geoff Canyon and I have been corresponding about a related
matter, comparing performance of LC Server with PHP.

PHP7 is such a radical improvement over PHP5 that it's almost unfair to
compare it to any scripting language now.  But it also prompts me to
wonder: is there anything in those PHP speed improvements which could be
applied to LC?


But that's for the future, and for CGI.  In the here-and-now, you're
exploring a different but very interesting area:

 > Would it work to have Node take in a request, launch an LC CGI
 > executable to process the request, set an event listener to wait
 > for LC to send the results back to Node, then have Node return
 > the results to the user?
 >
 > This is not unlike using Apache to launch LC CGI processes, but
 > the asynchronous nature of node would, presumably, tie up fewer
 > system resources and allow for larger concurrency. This could mean
 > having a couple thousand LC processes running at any one time - would
 > that be okay as long as the server had enough RAM?
 >
 > In general, would this work for a system that had to handle, say,
 > 10,000 server requests per minute?

A minute's a long time.  That's only 167 connections per second.

Likely difficult for any CGI, and certainly for LC (see general
performance relative to PHP, and the 70+% of LC boot time spent
initializing fonts that are almost never used in CGIs - BZ# 14115).

But there are other ways beyond CGI.

A couple years ago Pierre Sahores and I traded notes here on this list
about tests run with LC socket servers.  There's a lot across multiple
threads, but this may be a good starting point:
http://lists.runrev.com/pipermail/use-livecode/2016-March/225068.html

One thing is clear:  if high concurrency is a requirement, use something
dedicated to manage comms between connected clients and a pool of workers.

My own tests were measuring lchttpd against Apache, a different model
but instructive here because it's still about socket comms.  What I
found was that an httpd written in LC was outmatched by Apache two-fold.
But that also means that a quickly-thrown-together httpd script in LC
was about half as fast as the world's most popular httpd, written in C by
hundreds of contributors specializing in that task.

So, promising for certain tasks. :)

The key with my modded fork of the old mchttpd stack was rewriting all
socket comms to use callbacks.  The original used callbacks only for
incoming POST, but I extended that to include all writes as well.

Applying this to your scenario:

     client      client      client
    --------    --------    --------
       \           |          /
        ........internet.......
         \         |       /
  |----------- HTTP SERVER -----------|
  |     /           |          \      |
  |  worker       worker      worker  |
  |-----------------------------------|


While LC could be used in the role of the HTTP SERVER, that would be
wasteful.  It's not an interesting job, and dedicated tools like Node.js
and NginX will outperform it many-fold.  Let the experts handle the
boring parts. :)

The value LC brings to the table is application-specific.  So we let a
dedicated tool broker comms between external clients and a pool of
workers, where the workers could be LC standalones.
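The broker side of that split can be sketched as a small pool dispatcher (pooling logic only; the socket transport to the LC standalones is elided, so a "worker" here is any async function):

```javascript
// Hand each incoming request to a free worker; queue requests when
// all workers are busy, and drain the queue as workers finish.
class WorkerPool {
  constructor(workers) {
    this.free = [...workers]; // idle workers
    this.queue = [];          // requests waiting for a worker
  }
  dispatch(request) {
    return new Promise((resolve, reject) => {
      this.queue.push({ request, resolve, reject });
      this._drain();
    });
  }
  _drain() {
    while (this.free.length > 0 && this.queue.length > 0) {
      const worker = this.free.shift();
      const { request, resolve, reject } = this.queue.shift();
      worker(request)
        .then(resolve, reject)
        .finally(() => { this.free.push(worker); this._drain(); });
    }
  }
}
```

In the real setup, each worker entry would wrap a persistent socket connection to one LC standalone.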

That's where much of Pierre's experiments have focused, and where the
most interesting and productive use of LC lies in a scenario where load
requirements exceed practical limitations of LC as a CGI.

The boost goes beyond the RAM savings from having a separate LC instance
for each CGI request:  as a persistent process, it obviates the
font-loading and other init that take up so much time in an LC CGI.

As with the lchttpd experiments, using callbacks for all socket comms
between the LC-based workers and the HTTP SERVER will be essential for
keeping throughput optimal.


TL;DR: I think you're on the right track for a possible solution that
optimizes your development time without prohibitively impeding scalability.


The suitability of this comes down to:  what exactly does each
transaction do?

167 transactions/sec may not be much, or it might be a lot.

If a given transaction is fairly modest, I'd say it's probably worth the
time to put together a test system to try it out.

But if a transaction is CPU intensive, or heavily I/O bound, or
otherwise taking up a lot of time, the radical changes in PHP7 may make
it a better bet, esp. if run as FastCGI.

Can you tell us more about what a given transaction involves?

--
  Richard Gaskin
  Fourth World Systems
  Software Design and Development for the Desktop, Mobile, and the Web
  ____________________________________________________________________
  [hidden email]                http://www.FourthWorld.com


Re: Another server question (mixing node.js and LC)

J. Landman Gay via use-livecode
Thank you, Richard

A given transaction involves processing a user request, making two or three requests to the database, and returning around 500 kB to the user.

I certainly don’t need to load fonts in the LC process. Can that be turned off?

I like the idea of maintaining a queue of running LC processes and growing or shrinking it as needed based on request load.

How does the http server know which process to access?

I know that node.js has pretty simple code for launching a CGI process and listening for a result. I don't know how it would do that with an already-running process.

Sent from my iPhone

> On Feb 28, 2018, at 12:22 PM, Richard Gaskin via use-livecode <[hidden email]> wrote:


Re: Another server question (mixing node.js and LC)

J. Landman Gay via use-livecode
In reading about FastCGI and LC, it seems rather experimental. I am just wondering if replacing Apache with node.js as the HTTP server would give us the necessary concurrency capacity for using LC Server on a large scale.

Basically, I am soon going to start pitching augmented tours (idea suggested by guys at a business incubator) to tourism companies, using Augmented Earth, and I don’t want to have the server crash if a large number of people are using it all at once.

Sent from my iPhone

> On Feb 28, 2018, at 12:48 PM, [hidden email] wrote:
>
> Thank you, Richard
>
> A given transaction involves processing a user request, making two or three requests to the database, and returning around 500 kB to the user.
>
> I certainly don’t need to load fonts in the LC process. Can that be turned off?
>
> I like the idea of maintaining a queue of running LC processes and growing or shrinking it as needed based on request load.
>
> How does the http server know which process to access?
>
> I know that node.js has a pretty simple code for launching a CGI process and listening for a result. I don’t know how it would do that with an already-running process.
>
> Sent from my iPhone
>
>> On Feb 28, 2018, at 12:22 PM, Richard Gaskin via use-livecode <[hidden email]> wrote:
>>
>> jonathandlynch wrote:
>>
>>> I have another server question. I really like scripting with LC,
>>> because I can make improvements very quickly. This is important
>>> because of my very limited free time.
>>>
>>> But, I want to be able to handle many many concurrent server requests,
>>> the way node.js does.
>>
>> Good timing.  Geoff Canyon and I have been corresponding about a related matter, comparing performance of LC Server with PHP.
>>
>> PHP7 is such a radical improvement over PHP5 that it's almost unfair to compare it any scripting language now.  But it also prompts me to wonder: is there anything in those PHP speed improvements which could be applied to LC?
>>
>>
>> But that's for the future, and for CGI.  In the here-and-now, you're exploring a different but very interesting area:
>>
>>> Would it work to have node take In a request, launch an LC cgi
>>> executable to process the request, set an event listener to wait
>>> for LC to send the results back to Node, then have node return
>>> the results to the user?
>>>
>>> This is not unlike using Apache to launch LC CGI processes, but
>>> the asynchronous nature of node would, presumably, tie up fewer
>>> system resources and allow for larger concurrency. This could mean
>>> having a couple thousand LC processes running at any one time - would
>>> that be okay as long as the server had enough RAM?
>>>
>>> In general, would this work for a system that hand to handle, say,
>>> 10,000 server requests per minute?
>>
>> A minute's a long time.  That's only 167 connections per second.
>>
>> Likely difficult for any CGI, and certainly for LC (see general performance relative to PHP, and the 70+% of LC boot time spent initializing fonts that are almost never used in CGIs - BZ# 14115).
>>
>> But there are other ways beyond CGI.
>>
>> A couple years ago Pierre Sahores and I traded notes here on this list about tests run with LC socket servers.  There's a lot across multiple threads, but this may be a good starting point:
>> http://lists.runrev.com/pipermail/use-livecode/2016-March/225068.html
>>
>> One thing is clear:  if high concurrency is a requirement, use something dedicated to manage comms between connected clients and a pool of workers.
>>
>> My own tests were measuring lchttpd against Apache, a different model but instructive here because it's still about socket comms.  What I found was that an httpd written in LC was outmatched by Apache two-fold.  But that also means that a quickly-thrown-together httpd script in LC was about half as fast as the world's most popular httpd written in C by hundreds of contributors specializing in that task.
>>
>> So, promising for certain tasks. :)
>>
>> The key with my modded fork of the old mchttpd stack was rewriting all socket comms to use callbacks.  The original used callbacks only for incoming POST, but I extended that to include all writes as well.
>>
>> Applying this to your scenario:
>>
>>   client      client      client
>>  --------    --------    --------
>>     \           |          /
>>      ........internet.......
>>       \         |       /
>> |----------- HTTP SERVER -----------|
>> |     /           |          \      |
>> |  worker       worker      worker  |
>> |-----------------------------------|
>>
>>
>> While LC could be used in the role of the HTTP SERVER, that would be wasteful.  It's not an interesting job, and dedicated tools like Node.js and NginX will outperform it many-fold.  Let the experts handle the boring parts. :)
>>
>> The value LC brings to the table is application-specific.  So we let a dedicated tool broker comms between external clients and a pool of workers, where the workers could be LC standalones.
>>
>> That's where much of Pierre's experiments have focused, and where the most interesting and productive use of LC lies in a scenario where load requirements exceed practical limitations of LC as a CGI.
>>
>> The boost goes beyond the RAM savings from having a separate LC instance for each CGI request:  as a persistent process, it obviates the font-loading and other init that take up so much time in an LC CGI.
>>
>> As with the lchttpd experiments, using callbacks for all sockets comms between the LC-based workers and the HTTP SERVER will be essential for keep throughput optimal.
>>
>>
>> TL;DR: I think you're on the right track for a possible solution that optimizes your development time without prohibitively impeding scalability.
>>
>>
>> The suitability of this comes down to:  what exactly does each transaction do?
>>
>> 167 transactions/sec may not be much, or it might be a lot.
>>
>> If a given transaction is fairly modest, I'd say it's probably worth the time to put together a test system to try it out.
>>
>> But if a transaction is CPU intensive, or heavily I/O bound, or otherwise taking up a lot of time, the radical changes in PHP7 may make it a better bet, esp. if run as FastCGI.
>>
>> Can you tell us more about what a given transaction involves?
>>
>> --
>> Richard Gaskin
>> Fourth World Systems
>> Software Design and Development for the Desktop, Mobile, and the Web
>> ____________________________________________________________________
>> [hidden email]                http://www.FourthWorld.com
>>
>> _______________________________________________
>> use-livecode mailing list
>> [hidden email]
>> Please visit this url to subscribe, unsubscribe and manage your subscription preferences:
>> http://lists.runrev.com/mailman/listinfo/use-livecode


Re: Another server question (mixing node.js and LC)

J. Landman Gay via use-livecode
One thing you might do, if you decide to stick with Apache, is make sure
you use either the worker MPM or the event MPM (event sounds like the one
you want for this; read more at
https://httpd.apache.org/docs/2.4/misc/perf-tuning.html ) to get better
performance.

Alternatively, as Richard mentioned, there is nginx, which might be just
what the doctor ordered.  Basically, a request comes in and is handed off
to your LC script; when the response is ready, nginx sends it back to the
client, meanwhile still being able to listen for and accept new requests.
At least this is what I get from my reading, some of which is older
postings. Sounds pretty much like what you are thinking of doing with
node.js.
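A minimal nginx fragment along these lines (illustrative only: the ports, and the assumption that each persistent LC worker speaks HTTP on a local port, are mine):

```nginx
# Proxy incoming requests to a pool of persistent local workers,
# e.g. LC standalones each listening on its own port.
upstream lc_workers {
    server 127.0.0.1:9001;
    server 127.0.0.1:9002;
    server 127.0.0.1:9003;
}

server {
    listen 80;
    location / {
        proxy_pass http://lc_workers;   # round-robin by default
    }
}
```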

I'm also wondering where a Docker swarm might fit into your needs. Multiple
containers with a custom nginx image that can run your scripts, with load
balancing and automatic failover, could be a great thing and still be very
lightweight. (The nginx Docker image on Alpine is amazingly tiny.)

I've no clue how performance and reliability might compare to node.js for
this.

On Wed, Feb 28, 2018 at 11:26 AM, Jonathan Lynch via use-livecode <
[hidden email]> wrote:


Re: Another server question (mixing node.js and LC)

J. Landman Gay via use-livecode
I think you might be right, Mike. I have been reading about benchmark tests between Node, Apache, and nginx. Node does not seem to live up to the hype at all.

Sent from my iPhone

> On Feb 28, 2018, at 2:27 PM, Mike Bonner via use-livecode <[hidden email]> wrote:
>
> One thing you might do if you were to decide to stick with apache would be
> to make sure you use either the worker mpm or events mpm (sounds like
> events would be the one you wanted for this) (read more on this page...
> https://httpd.apache.org/docs/2.4/misc/perf-tuning.html ) to get better
> performance.
>
> Alternatively as Richard mentioned, there is nginx, which might be just
> what the doctor ordered.  Basically, a request comes in, is handed off to
> the your lc script, and when a response is ready, it handles it and sends
> it back to the client, meanwhile still being able to listen for, and accept
> new requests. At least this is what I get from my reading, some of which
> are older postings. Sounds pretty much like what you are thinking of doing
> with node.js.
>
> I'm also wondering where a docker swarm might fit into your needs. multiple
> containers with a custom nginx image that can run your scripts, with load
> balancing and auto failover could be a great thing, and still be very
> lightweight. (the nginx docker on alpine is amazingly tiny, lightweight)
>
> I've no clue how performance and reliability might compare to node.js for
> this.
>
> On Wed, Feb 28, 2018 at 11:26 AM, Jonathan Lynch via use-livecode <
> [hidden email]> wrote:
>
>> In reading about fastCGI and LC, it seems rather experimental. I am just
>> wondering if replacing Apache with node.js as the http server would give us
>> the necessary concurrency capacity for using LC server on a large scale.
>>
>> Basically, I am soon going to start pitching augmented tours (idea
>> suggested by guys at a business incubator) to tourism companies, using
>> Augmented Earth, and I don’t want to have the server crash if a large
>> number of people are using it all at once.
>>
>> Sent from my iPhone
>>
>>> On Feb 28, 2018, at 12:48 PM, [hidden email] wrote:
>>>
>>> Thank you, Richard
>>>
>>> A given transaction involves processing a user request, making two or
>> three requests to the database, and returning around 500 kB to the user.
>>>
>>> I certainly don’t need to load fonts in the LC process. Can that be
>> turned off?
>>>
>>> I like the idea of maintaining a queue of running LC processes and
>> growing or shrinking it as needed based on request load.
>>>
>>> How does the http server know which process to access?
>>>
>>> I know that node.js has pretty simple code for launching a CGI process
>> and listening for a result. I don’t know how it would do that with an
>> already-running process.
>>>
>>> Sent from my iPhone
>>>
>>>> On Feb 28, 2018, at 12:22 PM, Richard Gaskin via use-livecode <
>> [hidden email]> wrote:
>>>>
>>>> jonathandlynch wrote:
>>>>
>>>>> I have another server question. I really like scripting with LC,
>>>>> because I can make improvements very quickly. This is important
>>>>> because of my very limited free time.
>>>>>
>>>>> But, I want to be able to handle many many concurrent server requests,
>>>>> the way node.js does.
>>>>
>>>> Good timing.  Geoff Canyon and I have been corresponding about a
>> related matter, comparing performance of LC Server with PHP.
>>>>
>>>> PHP7 is such a radical improvement over PHP5 that it's almost unfair to
>> compare it to any scripting language now.  But it also prompts me to wonder:
>> is there anything in those PHP speed improvements which could be applied to
>> LC?
>>>>
>>>>
>>>> But that's for the future, and for CGI.  In the here-and-now, you're
>> exploring a different but very interesting area:
>>>>
>>>>> Would it work to have node take in a request, launch an LC cgi
>>>>> executable to process the request, set an event listener to wait
>>>>> for LC to send the results back to Node, then have node return
>>>>> the results to the user?
>>>>>
>>>>> This is not unlike using Apache to launch LC CGI processes, but
>>>>> the asynchronous nature of node would, presumably, tie up fewer
>>>>> system resources and allow for larger concurrency. This could mean
>>>>> having a couple thousand LC processes running at any one time - would
>>>>> that be okay as long as the server had enough RAM?
>>>>>
>>>>> In general, would this work for a system that had to handle, say,
>>>>> 10,000 server requests per minute?
>>>>
>>>> A minute's a long time.  That's only 167 connections per second.
>>>>
>>>> Likely difficult for any CGI, and certainly for LC (see general
>> performance relative to PHP, and the 70+% of LC boot time spent
>> initializing fonts that are almost never used in CGIs - BZ# 14115).
>>>>
>>>> But there are other ways beyond CGI.
>>>>
>>>> A couple years ago Pierre Sahores and I traded notes here on this list
>> about tests run with LC socket servers.  There's a lot across multiple
>> threads, but this may be a good starting point:
>>>> http://lists.runrev.com/pipermail/use-livecode/2016-March/225068.html
>>>>
>>>> One thing is clear:  if high concurrency is a requirement, use
>> something dedicated to manage comms between connected clients and a pool of
>> workers.
>>>>
>>>> My own tests were measuring lchttpd against Apache, a different model
>> but instructive here because it's still about socket comms.  What I found
>> was that an httpd written in LC was outmatched by Apache two-fold.  But
>> that also means that a quickly-thrown-together httpd script in LC was about
>> half as fast as the world's most popular httpd written in C by hundreds of
>> contributors specializing in that task.
>>>>
>>>> So, promising for certain tasks. :)
>>>>
>>>> The key with my modded fork of the old mchttpd stack was rewriting all
>> socket comms to use callbacks.  The original used callbacks only for
>> incoming POST, but I extended that to include all writes as well.
>>>>
>>>> Applying this to your scenario:
>>>>
>>>>  client      client      client
>>>> --------    --------    --------
>>>>    \           |          /
>>>>     ........internet.......
>>>>      \         |       /
>>>> |----------- HTTP SERVER -----------|
>>>> |     /           |          \      |
>>>> |  worker       worker      worker  |
>>>> |-----------------------------------|
>>>>
>>>>
>>>> While LC could be used in the role of the HTTP SERVER, that would be
>> wasteful.  It's not an interesting job, and dedicated tools like Node.js
>> and NginX will outperform it many-fold.  Let the experts handle the boring
>> parts. :)
>>>>
>>>> The value LC brings to the table is application-specific.  So we let a
>> dedicated tool broker comms between external clients and a pool of workers,
>> where the workers could be LC standalones.
>>>>
>>>> That's where much of Pierre's experiments have focused, and where the
>> most interesting and productive use of LC lies in a scenario where load
>> requirements exceed practical limitations of LC as a CGI.
>>>>
>>>> The boost goes beyond the RAM savings from having a separate LC
>> instance for each CGI request:  as a persistent process, it obviates the
>> font-loading and other init that take up so much time in an LC CGI.
>>>>
>>>> As with the lchttpd experiments, using callbacks for all socket comms
>> between the LC-based workers and the HTTP SERVER will be essential for keeping
>> throughput optimal.
>>>>
>>>>
>>>> TL;DR: I think you're on the right track for a possible solution that
>> optimizes your development time without prohibitively impeding scalability.
>>>>
>>>>
>>>> The suitability of this comes down to:  what exactly does each
>> transaction do?
>>>>
>>>> 167 transactions/sec may not be much, or it might be a lot.
>>>>
>>>> If a given transaction is fairly modest, I'd say it's probably worth
>> the time to put together a test system to try it out.
>>>>
>>>> But if a transaction is CPU intensive, or heavily I/O bound, or
>> otherwise taking up a lot of time, the radical changes in PHP7 may make it
>> a better bet, esp. if run as FastCGI.
>>>>
>>>> Can you tell us more about what a given transaction involves?
>>>>
>>>> --
>>>> Richard Gaskin
>>>> Fourth World Systems
>>>> Software Design and Development for the Desktop, Mobile, and the Web
>>>> ____________________________________________________________________
>>>> [hidden email]                http://www.FourthWorld.com
>>>>

_______________________________________________
use-livecode mailing list
[hidden email]
Please visit this url to subscribe, unsubscribe and manage your subscription preferences:
http://lists.runrev.com/mailman/listinfo/use-livecode

Re: Another server question (mixing node.js and LC)

J. Landman Gay via use-livecode
Is it possible to solve the C10k problem with simple CGI? LC has a relatively small footprint in RAM: if each LC process takes up 7 MB, then 10,000 processes would take 70 GB of RAM. NginX can manage that no problem on a dedicated server. Is there any reason why that would not work?

Sent from my iPhone

> On Feb 28, 2018, at 2:49 PM, [hidden email] wrote:
>
> I think you might be right, Mike. I have been reading about benchmark tests between node, Apache, and nginx. Node does not seem to live up to the hype at all.
>
> Sent from my iPhone
>


Re: Another server question (mixing node.js and LC)

J. Landman Gay via use-livecode
In reply to this post by J. Landman Gay via use-livecode
jonathandlynch wrote:

 > I certainly don’t need to load fonts in the LC process.

Most people doing server work don't.  It's nice that we now have
graphics capabilities in Server, and I can imagine some CGIs that maybe
generate postcard or other output where fonts would be needed.  But
probably not many.


 > Can that be turned off?

Not yet.  I have a request for a "-f" option to bypass that:
http://quality.livecode.com/show_bug.cgi?id=14115

If we could get buy-in from the team to allow this to be added, given
that a command line flag is by far the simplest of the remedies
discussed, I would imagine we may be able to find community resources to
implement it.



 > I like the idea of maintaining a queue of running LC processes and
 > growing or shrinking it as needed based on request load.
 >
 > How does the http server know which process to access?

There are various queuing methods, the simplest being a round-robin,
where a counter keeps track of the last worker used and each request
moves on to the next one.

Comms between HTTP server and workers also happen via sockets, on
internal ports.

The mechanics will vary from HTTP server to HTTP server, but the basic
setup seems pretty common.

--
  Richard Gaskin
  Fourth World Systems



Re: Another server question (mixing node.js and LC)

J. Landman Gay via use-livecode
On Thu, March 1, 2018 5:38 pm, Richard Gaskin via use-livecode wrote:
> It's nice that we now have
> graphics capabilities in Server,

Is there any doc on this somewhere ?
And is this feature already available on the LC version of on-rev accounts ?

Thanks,
jbv




Re: Another server question (mixing node.js and LC)

J. Landman Gay via use-livecode
jbv wrote:

 > On Thu, March 1, 2018 5:38 pm, Richard Gaskin via use-livecode wrote:
 >> It's nice that we now have
 >> graphics capabilities in Server,
 >
 > Is there any doc on this somewhere ?

I think it was in the Release Notes for whatever version it was enabled
in (v7?), but I haven't checked to see if it's in the LC Server Guide
included with the download.

I haven't needed it myself, but my recollection is it's pretty
straightforward:  use the "export snapshot from <obj>" command to
produce an image of whatever you can put on a card.

This isn't new on the desktop of course, but earlier versions of LC
Server didn't include the graphics subsystem.


 > And is this feature already available on the LC version of on-rev
 > accounts ?

Hard to say. I have an on-rev account, but haven't set it up.  I would
imagine that the mother ship is using the latest Stable build, no?

IIRC this has been around since at least v7, so the much-faster v8 and
v9 engines should have it too.

--
  Richard Gaskin
  Fourth World Systems
  Software Design and Development for the Desktop, Mobile, and the Web
  ____________________________________________________________________
  [hidden email]                http://www.FourthWorld.com


Re: Another server question (mixing node.js and LC)

J. Landman Gay via use-livecode
In reply to this post by J. Landman Gay via use-livecode
If you just need the Community edition, it should be pretty easy to compile
a copy without that feature. I have not looked at the source, though.

On Thu, Mar 1, 2018 at 10:38 AM Richard Gaskin via use-livecode <
[hidden email]> wrote:

>
>  > Can that be turned off?
>
> Not yet.  I have a request for a "-f" option to bypass that:
> http://quality.livecode.com/show_bug.cgi?id=14115
>
> If we could get buy-in from the team to allow this to be added, given
> that a command line flag is by far the simplest of the remedies
> discussed I would imagine we may be able to find community resources to
> implement it.
>
>

Re: Another server question (mixing node.js and LC)

J. Landman Gay via use-livecode
In reply to this post by J. Landman Gay via use-livecode
On 01/03/2018 18:48, Richard Gaskin via use-livecode wrote:

> jbv wrote:
>
> > And is this feature already available on the LC version of on-rev
> > accounts ?
>
> Hard to say. I have an on-rev account, but haven't set it up.  I would
> imagine that the mother ship is using the latest Stable build, no?
>
Yeah, right :-(
The default on on-rev (at least, on sage) is 7.1 !!

You can, I believe, request any particular version to be enabled per-domain.
Or you can specify a specific version (but I couldn't find a way to
predict which versions might be available).

Or, my choice, just give up on on-rev and use hostM (uses latest stable
release by default, and gives you a simple way to specify which major
release you would prefer to use), or Dreamhost (can't remember how they
did it, but I remember it worked OK).

-- Alex.


Factoring over Scaling (was: Another server question (mixing node.js and LC))

J. Landman Gay via use-livecode
In reply to this post by J. Landman Gay via use-livecode
It's easy to get excited about C10k problems, and I got caught up in it
myself.  Geeks love this stuff.  It's hard to resist.

But once the coffee wore off, I changed hats and reconsidered this
problem from the standpoint not of an implementer, but a business owner.

Here's the core of the business need, summarized from two posts:

jonathandlynch wrote:
 > A given transaction involves processing a user request, making two
 > or three requests to the database, and returning around 500 kB to
 > the user.
...
 > Basically, I am soon going to start pitching augmented tours (idea
 > suggested by guys at a business incubator) to tourism companies, using
 > Augmented Earth, and I don’t want to have the server crash if a large
 > number of people are using it all at once.

Questions to consider, not for us but for your business planning:

- How many users will be an achievable maximum?

- How many users do you have today?

- What is your attrition rate?

- How long will it take you to get from your current
   user base to that maximum?

- What marketing plan will be needed to acquire those
   new customers?

- How much will that marketing plan cost to execute
   this year, next year, and the year after?

And Guy Kawasaki's favorite question:

- How will you be able to fulfill that marketing plan if you spend
   all your money on infrastructure development and provisioning?

:)

Grab your favorite after-dinner beverage, settle into your comfy chair,
and enjoy this talk by Guy, esp. Mistake #2:

   Guy Kawasaki: The Top 10 Mistakes of Entrepreneurs
   See Mistake #2: Scaling too soon (@10:57):
   https://www.youtube.com/watch?v=HHjgK6p4nrw


What we all want is one system that will handle anything we throw at it.

But what we truly need is just any system that will handle the customer
load we have today, with enough unused capacity for near-term growth.

When we run into capacity limits we have the most enviable business
problem:  too many customers. :)

That problem is self-correcting in software, unlike other forms of
manufacturing that have a cost of physical goods per unit sold.  We have
no supply chain, no fabrication, no inventory warehouse.  In software,
the only raw materials needed are bandwidth and CPU time, both of which
are far easier to acquire than customers.

Ultimately every system will run into capacity constraints. If you get
as big as Google, you'll eventually outgrow literally every existing
system on earth and even need to invent your own file system.  Most of
us don't get that big.  And no one starts that big.


If you find yourself with that most enviable of business problems, you
can rest easy because:

- You're not the first person to need scaling.

- At that point you have income to invest in scaling.


Harder than scaling is launching, with marketing a close second.  And
unless both of those happen, and are done with excellence, any
investment in scaling won't matter.

So with all that in mind, I would prioritize time-to-market first,
leaving as much time and money as you can for marketing.

To make the most of development time, use what you know.

It's not necessary to have large-scale capacity at the outset.  You just
want to make sure you don't make future scaling efforts harder than they
need to be.

For where you are at the moment, factoring may be more valuable than
scaling.  Use what you know and enjoy, and just make sure that your
system is set up with each element as discrete as it can practically be:
  client, server logic, server storage.  Separation of concerns, as they
say.

If you set those up with well-defined APIs between them, you can change
out any one of them without affecting the other two.

Then you can turn your attention to the harder work, the marketing plan.
  And when that pays off you'll be able to expand system components as
you need to.

And with any luck, you might even get so big that you'll need to invent
your own file system too.  I hope you do.  But along the way you'll be
able to consider scaling horizontally, or vertically, or both, as you
learn more about usage patterns as the actual needs become evident.

TL;DR: Relax about scaling, go cut some code. :)

--
  Richard Gaskin
  Fourth World Systems
  Software Design and Development for the Desktop, Mobile, and the Web
  ____________________________________________________________________
  [hidden email]                http://www.FourthWorld.com


Re: Another server question (mixing node.js and LC)

J. Landman Gay via use-livecode
In reply to this post by J. Landman Gay via use-livecode
Indeed, on sage the engine version is 7.1
I did a few quick tests last evening and things like "create btn"
or "export snapshot" work. I need to do more tests, but so far it's
a nice surprise.

jbv


On Thu, March 1, 2018 11:33 pm, Alex Tweedly via use-livecode wrote:

> On 01/03/2018 18:48, Richard Gaskin via use-livecode wrote:
>
>
>> jbv wrote:
>>
>>> And is this feature already available on the LC version of on-rev
>>> accounts ?
>>
>> Hard to say. I have an on-rev account, but haven't set it up.  I would
>> imagine that the mother ship is using the latest Stable build, no?
>>
> Yeah, right :-(
> The default on on-rev (at least, on sage) is 7.1 !!
>
>
> You can, I believe, request any particular version to be enabled
> per-domain. Or you can specify a specific version (but I couldn't find a
> way to predict which versions might be available).
>
> Or, my choice, just give up on on-rev and use hostM (uses latest stable
> release by default, and gives you a simple way to specify which major
> release you would prefer to use), or Dreamhost (can't remember how they
> did it, but I remember it worked OK).
>
> -- Alex.
>
>




Re: Factoring over Scaling (was: Another server question (mixing node.js and LC))

J. Landman Gay via use-livecode
In reply to this post by J. Landman Gay via use-livecode
The sum total of any scaling I have done is to finally get my company to adopt my application in its workflow. :-)

Bob S


> On Mar 1, 2018, at 22:13 , Richard Gaskin via use-livecode <[hidden email]> wrote:
>
> TL;DR: Relax about scaling, go cut some code. :)
>
> --
> Richard Gaskin

