Synchronisation of sound and vision

Synchronisation of sound and vision

Graham Samuel via use-livecode
Folks, forgive my ignorance, but it’s a long time since I last considered the following, and I wonder what pitfalls there are.

I have in mind a project where a recording of someone reading a poetry text (“old-fashioned” poetry in metrical lines) needs to be synchronised with the display of the text itself on screen. Ideally a cursor or highlight would move from word to word with the speaker, although that would almost certainly be too much work for the developer (me); at the very least, lines should be highlighted as they are spoken. I can see that one would inevitably have to add cues to the recorded audio to fire off the highlighting, which is an unavoidable amount of work, but can it be done at all in LC? For example, what form would the cues take?

TIA

Graham

Re: Synchronisation of sound and vision

Tore Nilsen via use-livecode
You will have to use the callbacks property of the player to do what you want; the callbacks list would be your cues. From the dictionary:

The callbacks of a player is a list of callbacks, one per line. Each callback consists of an interval number, a comma, and a message name.
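
For example, a minimal sketch (the player and field names, the message name and the interval numbers here are all hypothetical, and intervals are counted in the player's timeScale units, not seconds):

local sCurrentLine

on loadPoem
   put 0 into sCurrentLine
   -- one callback per line: interval number, comma, message name
   set the callbacks of player "Poem" to \
         "0,lineSpoken" & return & "2400,lineSpoken" & return & "4950,lineSpoken"
   start player "Poem"
end loadPoem

on lineSpoken
   -- the player sends this message each time a callback interval is reached
   add 1 to sCurrentLine
   set the backgroundColor of line sCurrentLine of field "PoemText" to yellow
end lineSpoken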


Regards
Tore Nilsen



Re: Synchronisation of sound and vision

Graham Samuel via use-livecode
Thanks, that’s a start - I will look at the dictionary. I suppose the callbacks rely on analysing how long each line or word takes the performer to say. That is a lot of work, but there’s no way around it, since potentially every line takes a different length of time to recite. If it’s too much work, I guess I can just display the whole text and have one callback at the end of each recording. Maybe that is really the practical solution for a large body of work (say, all the Shakespeare sonnets).

Anyway thanks for the hint.

Graham


Re: Synchronisation of sound and vision

Alex via use-livecode
It shouldn't be that much work (!? he said, in the comfort of knowing he
won't be doing it :-), at least for lines. Individual words could be too
hard.

Write a little app so you can listen to the recording and click a button at
the start (or end?) of each line, and just keep track of the times versus
lines that way. Add the ability to pick up from an existing position, and
with a little manual editing you should be nearly there.

Of course, if you (or your helpers) are making the recordings, then you
can capture the button clicks at the same time as the recording is being
made.
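
Something like this in the button, perhaps (player and field names are
hypothetical):

on mouseUp
   -- append the player's current position, in timeScale units, as a cue point
   put the currentTime of player "Poem" & comma & "lineSpoken" & return \
         after field "CueList"
end mouseUp

The contents of that field could then be pasted more or less straight into
the callbacks of the player.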

Alex.


Re: Synchronisation of sound and vision

Tore Nilsen via use-livecode
Yes, you have to set the callbacks manually. I would opt for lines rather than words. You get the callback points by reading the currentTime property of the player. If you start at the beginning, you can set the first item of the first line of the callbacks to 0. Then you can set a callback for each subsequent line of the poem by pausing the player after each line and getting the currentTime of the player. This could be semi-automated with a script that is triggered each time you pause the player, I guess.
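
Something like this might do it (an untested sketch: the player sends a playPaused message when playback is paused, and the player and field names here are hypothetical):

on playPaused
   -- record the position at which playback was paused as the next cue
   put the currentTime of the target & comma & "lineSpoken" & return \
         after field "CueList"
end playPaused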

Regards
Tore
 


Re: Synchronisation of sound and vision

Devin Asay via use-livecode
Graham,

Take a look at the duration and the timeScale properties of player objects. By dividing duration by timeScale you get the length of the video in seconds.


put the duration of player "foo" / the timeScale of player "foo" into totalSeconds

What you are contemplating is very doable, but you’ll have to do a fair amount of work to get the synching right. You can take one of several approaches:

- Calculate times as above to predict when to show/highlight the next line. Can be tricky with long video files and rounding errors.

- Check the currentTime property of the player to determine the startTime and endTime of each spoken line, and set the playSelection of the player to true. When the played segment ends, immediately load the following start and end times and play again. Something like this, from memory:

set the startTime of player "foo" to 444
set the endTime of player "foo" to 999
set the currentTime of player "foo" to the startTime of player "foo"
set the playSelection of player "foo" to true
start player "foo"

- Break up the video or audio file into separate files, one line per file, then play each succeeding file when the previous one reaches its end. The playStopped message is your friend here.
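
Something along these lines for that last approach (again from memory, and the clip-list variable is hypothetical):

local sClipList   -- one audio file path per line, in playback order

on playNextClip
   if sClipList is empty then exit playNextClip
   set the filename of player "foo" to line 1 of sClipList
   delete line 1 of sClipList
   start player "foo"
end playNextClip

on playStopped
   -- sent by the player when the current clip finishes
   playNextClip
end playStopped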

Like I said, it’s doable, but takes a bit of thought and planning, creating segment indexes, that sort of thing.

Hope this helps.

Devin



Devin Asay
Director
Office of Digital Humanities
Brigham Young University


Re: Synchronisation of sound and vision

Tore Nilsen via use-livecode
Using callbacks negates the need to fiddle with durations or timescales and start or stop times. It uses the sampling intervals as they are, regardless of time. In my opinion it is much easier than trying to calculate start and end times. You can easily handle large audio/video files using callbacks. I would recommend using one file per poem, though; this simplifies the handling of the messages sent from the player. You can basically use the same message for all files, resetting a counter variable each time you load a new file to keep track of which line you should act upon.

You could also store the callbacks for each audio file in a text file and set the callbacks as part of the handler used to load each audio file.
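
For instance (a sketch; the cue-file layout and all names here are hypothetical, assuming a plain-text cue file stored next to each audio file):

local sCurrentLine

on loadPoem pAudioPath
   put 0 into sCurrentLine   -- reset the line counter for the new poem
   set the filename of player "Poem" to pAudioPath
   -- the cue file holds one "interval,messageName" pair per line
   set the callbacks of player "Poem" to URL ("file:" & pAudioPath & ".cues.txt")
end loadPoem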

Regards
Tore


Re: Synchronisation of sound and vision

Graham Samuel via use-livecode
Thanks Tore, Devin, Peter and Alex! There is a lot to chew on here. I do in fact have one file per poem - the user of the program will see each poem as a different object, as it were - so there would be no advantage in combining them. I will try some experiments shortly. Doubtless after that there will be more questions.

The issue of user platform preferences (desktop or app etc.) which Peter discusses must be a universal one. I have previously experienced the gotcha of school labs not wanting to install applications. But I am getting far ahead of myself, since there are so many other issues to consider before I get near to making a proper platform decision.

Graham


Re: Synchronisation of sound and vision

Devin Asay via use-livecode
Tore,

I would agree if callbacks were 100% reliable. I have tried them in the past and found that in some cases they were missed. I never had any trouble when using time indices. But I should say that I haven’t needed to do this for several years, and the callbacks in the new player object might be completely reliable.

Creating time indices makes your application more flexible in other ways, however. It’s dead simple, for instance, to set up an application where you can click on a line of text and play just that line: set the startTime, set the endTime, set the playSelection to true, start playing. Done. That would be a little more challenging if all you had was callbacks.
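
For example, in the script of a locked text field showing the poem (a sketch; the times array and the player and variable names are hypothetical):

local sTimesA   -- sTimesA[lineNumber] holds "startTime,endTime"

on mouseUp
   put word 2 of the clickLine into tLine   -- the clickLine returns e.g. "line 3 of field 1"
   set the startTime of player "Poem" to item 1 of sTimesA[tLine]
   set the endTime of player "Poem" to item 2 of sTimesA[tLine]
   set the currentTime of player "Poem" to the startTime of player "Poem"
   set the playSelection of player "Poem" to true
   start player "Poem"
end mouseUp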

One of the great things about LiveCode is that there is almost always more than one way to do what you want.

Regards,

Devin




Devin Asay
Director
Office of Digital Humanities
Brigham Young University


Re: Synchronisation of sound and vision

Tore Nilsen via use-livecode
Devin,
I haven’t used callbacks much, and so far I haven’t run into any problems. If missing callbacks is still an issue, then I agree with you that setting startTime and endTime is the best option. I use this method in a small application I made for myself, in which I write comments on audio files handed in by my English students. They can then control playback of the segments I have commented on by clicking links in the field that shows the comments. The lack of audio recording capability on Mac has forced me to use written feedback where I would otherwise have preferred using two players and audio feedback.

Regards
Tore


Re: Synchronisation of sound and vision

Devin Asay via use-livecode
Tore,

You can do audio recording on Mac now using the mergMicrophone library. It works great, and I believe it is available in every edition of LC, including Community.

Devin


Devin Asay
Director
Office of Digital Humanities
Brigham Young University


Re: Synchronisation of sound and vision

Tore Nilsen via use-livecode
I wasn’t aware of this - sounds great! (Pun intended.) I will have to go back to my application and experiment a bit before the next batch of student recordings lands on my desktop. (You know, pun…)

Tore


Re: Synchronisation of sound and vision

Richard Gaskin via use-livecode
Callbacks are the way to go, but note that LC's callbacks won't work on
Linux.

Because there's no functioning LC player object for Linux at all.

--
  Richard Gaskin
  Fourth World Systems
  Software Design and Development for the Desktop, Mobile, and the Web
  ____________________________________________________________________
  [hidden email]                http://www.FourthWorld.com


Re: Synchronisation of sound and vision

Sean Cole via use-livecode
I worked on a similar project. I ended up splitting the audio into smaller subclips and triggered each to play in turn. Callbacks were a pain in the b

Sean Cole
Pi Digital Productions Ltd
eMail Ts & Cs



Re: Synchronisation of sound and vision

Ben Rubinstein via use-livecode
I held off contributing to this discussion because it sounded like callbacks were a solid solution. However, if that's not necessarily true, it might be worth thinking about text tracks.

This depends of course on what effect you want to achieve, and what platforms you're targeting. But way back when (cue more CD-ROM nostalgia) we produced a CD-ROM including some interviews. We put the transcript in a text track in QuickTime, but hid the text track from the player and intercepted it in code, so that we could present it in the way we wanted.

I don't think LC lets you do that, but it does let you enable and disable tracks. So if you were happy with the default presentation of the text, that might be a very straightforward solution.
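
Something like this, if memory serves (a sketch only; check the dictionary entries for the tracks and enabledTracks properties, since the exact line format may differ, and the player and track names here are hypothetical):

on showTranscript
   -- each line of the tracks is roughly: track id, name, offset, duration
   repeat for each line tTrack in the tracks of player "Interview"
      if item 2 of tTrack contains "Text" then
         -- enabledTracks is a return-delimited list of track ids
         set the enabledTracks of player "Interview" to \
               (the enabledTracks of player "Interview") & return & item 1 of tTrack
      end if
   end repeat
end showTranscript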

Ben

On 12/02/2020 18:57, Devin Asay via use-livecode wrote:

> Tore,
>
> I would agree if callbacks were 100% reliable. I have tried them in the past and found that in some cases they were missed. I never had any trouble when using time indices. But I should say that I haven’t needed to do this for several years, and the callbacks in the new player object might be completely reliable.
>
> In other ways creating time indices makes your application more flexible, however. It’s dead simple, for instance, to set up an application where you can click on a line of text and play just that line. Set the startTime, set the endTime, set the playSelection to true, start playing. Done. That would be a little more challenging if all you had was callbacks.
>
> One of the great things about LiveCode is that there is almost always more than one way to do what you want.
>
> Regards,
>
> Devin
>
>
> On Feb 12, 2020, at 9:55 AM, Tore Nilsen via use-livecode <[hidden email]<mailto:[hidden email]>> wrote:
>
> Using callbacks negate the need to fiddle with duration or  timescales and start or stop times. It uses the sampling intervals as is, regardless of time. In my opinion it is much easier than trying to calculate start and end times. You can easily handle large audio/video files using callbacks. I would recommend using one file per poem though, this simplifies the handling of the messages sent from the player. You can basically use the same message for all files, resetting a counter variable each time you load a new file to handle with line you would like to act upon.
>
> You could also store the callbacks for each audio file in a text file and set the callbacks as a part of the handler used to load each audio file.
>
> Regards
> Tore
>
> 12. feb. 2020 kl. 16:49 skrev Devin Asay via use-livecode <[hidden email]<mailto:[hidden email]>>:
>
> Graham,
>
> Take a look at the duration and the timeScale properties of player objects. By dividing duration by timeScale you get the length of the video in seconds.
>
>
> put the duration of player  “foo” / the timescale of player  “foo” into totalSeconds
>
> What you are contemplating is very doable, but you’ll have to do a fair amount of work to do to get the synching right. You can take one of several approaches:
>
> - Calculate times as above to predict when to show/highlight the next line. Can be tricky with long video files and rounding errors.
>
> - Check the currentTime property of the player to determine the startTime and endTime of each spoken line, and set the playSelection of the player to true. When the played segment ends, immediately load the following start and end times and play again. Something like this, from memory:
>
> set the startTime of player “foo” to 444
> set the endTime of player “foo” to 999
> set the currentTime of player “foo” to the startTime of player “foo”
> set the playerSelection of player “foo” to true
> start player “foo"
> - Break up the video or audio file into separate files, one line per file, then play each succeeding file when the previous one reaches its end. The playStopped message is your friend here.
>
> Like I said, it’s doable, but takes a bit of thought and planning, creating segment indexes, that sort of thing.
>
> Hope this helps.
>
> Devin
>
>
>
>
> Devin Asay
> Director
> Office of Digital Humanities
> Brigham Young University
>

_______________________________________________
use-livecode mailing list
[hidden email]
Please visit this url to subscribe, unsubscribe and manage your subscription preferences:
http://lists.runrev.com/mailman/listinfo/use-livecode
Reply | Threaded
Open this post in threaded view
|

Re: Synchronisation of sound and vision

Paul Dupuis via use-livecode
This would be so fun to work on. Let us know what approach you end up using to
get the job done. Good luck.

On Thu, Feb 13, 2020 at 5:22 AM Ben Rubinstein via use-livecode <
[hidden email]> wrote:

> I held off contributing to this discussion because it sounded like callbacks
> were a solid solution. However if that's not necessarily true it might be
> worth thinking about text tracks.
>
> This depends of course on what effect you want to achieve, and what
> platforms
> you're targeting. But way back when (cue more CD-ROM nostalgia) we
> produced a
> CD-ROM including some interviews. We put the transcript in a text track in
> QuickTime, but hid the text track from the player, and intercepted it in
> code so that we could present it in the way we wanted.
>
> I don't think LC lets you do that, but it does let you enable and disable
> tracks. So if you were happy with the default presentation of the text, that
> might be a very straightforward solution.
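>
> Something along these lines might do it (untested; this assumes each line of the tracks property is trackID,name,offset,duration, and that the text track's name contains "text"):
>
> -- enable the movie's text track alongside whatever is already enabled
> repeat for each line tTrack in the tracks of player "Poem"
>    if item 2 of tTrack contains "text" then
>       set the enabledTracks of player "Poem" to \
>             (the enabledTracks of player "Poem") & return & item 1 of tTrack
>       exit repeat
>    end if
> end repeat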
>
> Ben
>
> On 12/02/2020 18:57, Devin Asay via use-livecode wrote:
> > Tore,
> >
> > I would agree if callbacks were 100% reliable. I have tried them in the
> past and found that in some cases they were missed. I never had any trouble
> when using time indices. But I should say that I haven’t needed to do this
> for several years, and the callbacks in the new player object might be
> completely reliable.
> >
> > In other ways creating time indices makes your application more
> flexible, however. It’s dead simple, for instance, to set up an application
> where you can click on a line of text and play just that line. Set the
> startTime, set the endTime, set the playSelection to true, start playing.
> Done. That would be a little more challenging if all you had was callbacks.
> >
> > One of the great things about LiveCode is that there is almost always
> more than one way to do what you want.
> >
> > Regards,
> >
> > Devin


--
Tom Glod
Founder & Developer
MakeShyft R.D.A (www.makeshyft.com)
Mobile:647.562.9411
_______________________________________________
use-livecode mailing list
[hidden email]
Please visit this url to subscribe, unsubscribe and manage your subscription preferences:
http://lists.runrev.com/mailman/listinfo/use-livecode
Reply | Threaded
Open this post in threaded view
|

Re: Synchronisation of sound and vision

Paul Dupuis via use-livecode
In reply to this post by Paul Dupuis via use-livecode
Hi Graham

I have an application created with LiveCode that uses callbacks from the player to synchronize annotations to the video being played. I find the callbacks very reliable at sending the callback messages. Links are represented on a timeline by vertical lines. I have various types of annotation data attached to each link: a text label, a multi-line text comment, a linked video comment, the color of the link, and some actions such as stop main video, show linked video and play linked video. There is also a start and end time for a selection of the main video. I have other types of annotation data on my planned feature list. One is what you are talking about: a scrolling text field that would scroll to a certain point as specified in the annotation data. I have some rough ideas of how I would implement it but haven’t gotten to it yet.
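
One way that scrolling idea might look, as a rough untested sketch (the field, variable and handler names are just placeholders): store one scroll position per cue and step through them from the callback handler.

local sCueIndex, sScrollPositions -- one vScroll value per line

on scrollCue
   add 1 to sCueIndex
   set the vScroll of field "Transcript" to line sCueIndex of sScrollPositions
end scrollCue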

My application also captures the video and audio in .mov files using the new camera control. These are saved in a project file that also contains a text file holding all the callback times along with the annotation data associated with each callback time. The application is cloud based, so projects can be shared with other users.

My initial target market has been sign language interpreter training and testing. However, there is nothing preventing it being used for spoken languages; I just have not been targeting that market yet.

In the past I did some experimenting with opening audio files in the player, and it worked. I have not tried that with LC 9.x so can’t say whether it still works. (I am on a holiday so can’t try it out. I will try it when I am back and let you know.)

With my application as it currently works, the workflow I could see for your case is that the student creates a project and records themselves reading the poem. Then they can open the project in annotation mode and create the links in the timeline at the points you want. The text for each section of the poem could be entered into the multi-line field in the links corresponding to each segment. When the video is played back, its playback can be automatically stopped each time a video link is triggered by a callback being fired, and the text from that section is shown. Once they are done, the student could share the project with you in the cloud, and you can then review the files from the students and add further comments.

This doesn’t do exactly what you want, but you could use it to see how well the callbacks work in an application. I would be interested in your thoughts on it after you give it a try.

You can try out the application at VideoLinkwell.com. I can set up a free trial for you: just put a note on the contact page https://videolinkwell.com/contact/ and I will set up a free account so you can download the software and try it out.

(Note I am in the midst of finishing off an upgrade, so I am hoping to have a new version with new features and bug fixes out in the near future.)

Martin

Sent from my iPad

> On Feb 12, 2020, at 1:03 PM, Graham Samuel via use-livecode <[hidden email]> wrote:
>
> Thanks Tore, Devin, Peter and Alex! There is a lot to chew on here. I do in fact have one file per poem - the user of the program will see each poem as a different object, as it were, so there would be no advantage to combining them. I will try to do some experiments shortly. Doubtless after that there will be more questions.
>
> The issue of user platform preferences (desktop or app etc) which is discussed by Peter must be a universal one. I have previously experienced the gotcha of school labs not wanting to install applications. But I am getting far ahead of myself, since there are so many other issues to consider before I get near to making a proper platform decision.
>
> Graham
_______________________________________________
use-livecode mailing list
[hidden email]
Please visit this url to subscribe, unsubscribe and manage your subscription preferences:
http://lists.runrev.com/mailman/listinfo/use-livecode
Reply | Threaded
Open this post in threaded view
|

Re: Synchronisation of sound and vision

Paul Dupuis via use-livecode
Hi

Forgot to say. The version on the website is Mac OS X only, but the update I am working on includes a Windows version.

Martin

Sent from my iPhone

> On Feb 14, 2020, at 10:56 AM, KOOB via use-livecode <[hidden email]> wrote:
>
>
>
> (Note I am in the midst of finishing off an upgrade, so I am hoping to have a new version with new features and bug fixes out in the near future.)
>
> Martin
>
> Sent from my iPad
>


_______________________________________________
use-livecode mailing list
[hidden email]
Please visit this url to subscribe, unsubscribe and manage your subscription preferences:
http://lists.runrev.com/mailman/listinfo/use-livecode
Reply | Threaded
Open this post in threaded view
|

Re: Synchronisation of sound and vision

Paul Dupuis via use-livecode
Hi Martin

This is to thank you for your long message, which deserves considerable study (somewhat held up this week by having to host, in support of my wife mainly, five hungry French teenagers). I run my stuff on Macs, using Parallels when a PC is called for, so I am happy with your version.

I’ll be back.

Graham

> On 14 Feb 2020, at 20:03, Martin Koob via use-livecode <[hidden email]> wrote:
>
> Hi
>
> Forgot to say. The version on the website is Mac OS X only, but the update I am working on includes a Windows version.
>
> Martin
>
> Sent from my iPhone
>


_______________________________________________
use-livecode mailing list
[hidden email]
Please visit this url to subscribe, unsubscribe and manage your subscription preferences:
http://lists.runrev.com/mailman/listinfo/use-livecode