Naturally, I assumed Unix timestamps were the number of seconds since the epoch. Never mind any leap seconds and such.
That's the problem -- Unix "doesn't mind leap seconds", but leap seconds exist, and ignoring them has been the source of many bugs. It's like ignoring leap years, or thinking that 2 digits is enough to hold the year -- in the short run everything is fine, but eventually you'll get bitten.
For example, at midnight UTC on 2016-10-01 (October first) Linux reported a time_t of 1475280000. But in fact there have been 26 leap seconds, so at least 1475280026 seconds elapsed between Jan. 1, 1970 and Oct. 1, 2016 (I say "at least" because prior to 1972 UTC did not use the SI second definition, so I'm not sure exactly how many SI seconds elapsed in 1970 and 1971).
I don't get the point about differentiating between "23:59:60 of one day and 00:00:00 of the next". Surely if you are counting hours, minutes, and seconds there is no 23:59:60. Minutes and seconds are in base 60, so you can only count up to 23:59:59. The next tick is 00:00:00.
Ah, but that's where you're wrong -- a leap second is by definition 23:59:60 (UTC) of the day it is inserted. So for example this coming Dec. 31 the sequence of seconds in UTC will be:
2016-12-31 23:59:58
2016-12-31 23:59:59
2016-12-31 23:59:60 (A)
2017-01-01 00:00:00 (B)
2017-01-01 00:00:01
but the times marked (A) and (B) will have the same time_t value -- in other words one time_t value will persist for two actual elapsed seconds. This can cause issues for time-sensitive systems, to say the least.
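You can see the collapse with a quick sketch in Python -- `calendar.timegm` implements exactly the POSIX day-count arithmetic, so it happily accepts second 60 and folds it into the next day:

```python
import calendar

# The leap second itself: 2016-12-31 23:59:60 UTC (the time marked (A))
leap = calendar.timegm((2016, 12, 31, 23, 59, 60, 0, 0, 0))

# The first second of the new year: 2017-01-01 00:00:00 UTC (the time marked (B))
new_year = calendar.timegm((2017, 1, 1, 0, 0, 0, 0, 0, 0))

# POSIX arithmetic assigns both the same count
print(leap, new_year)  # 1483228800 1483228800
```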
The root of the problem is that the second was originally defined as 1/86400 of a day, but it turns out that the length of a day is not constant -- the Earth's rotation is slightly variable, and modern clocks are more than accurate enough to detect this. Nowadays the second is defined in terms of a particular frequency of radiation emitted by cesium atoms. But UTC is still required to track the sun (it must stay within 0.9 seconds of UT1, solar time), and so in order to keep clocks ticking SI seconds in step with the Earth's rotation we have to insert leap seconds.
POSIX basically just buries its head in the sand and tries to pretend that leap seconds don't exist. None of the standard APIs has a coherent way for applications to detect them, and time_t is explicitly defined in terms of 86400 * number of days (rather than the original intention that it would be "seconds elapsed since the epoch").
There's one more thing about Unix time - it's UTC, wherever you are (unless you mess up your hardware clock settings, typically on a dual-boot system with Windows' broken time handling). So the local timezone and DST are handled at the point where the time is displayed in human format, e.g. when you want to look at the time stamp of a file. The library translates the timestamp to local time in year/month/day/hour/min/sec format and applies the timezone offset, and DST if applicable.
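For instance (Python; the +02:00 offset here is hand-picked for illustration -- a real system would look up the zone and its DST rule in the tz database):

```python
from datetime import datetime, timedelta, timezone

t = 1475280000  # the 2016-10-01 00:00:00 UTC timestamp mentioned above

# The stored value is plain UTC...
utc = datetime.fromtimestamp(t, tz=timezone.utc)

# ...and the zone offset (plus DST, when in effect) is applied only at
# display time. Central European Summer Time was UTC+2 on that date.
local = utc.astimezone(timezone(timedelta(hours=2)))

print(utc)    # 2016-10-01 00:00:00+00:00
print(local)  # 2016-10-01 02:00:00+02:00
```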
In my opinion that's where leap seconds should be applied as well, because when you handle leap seconds by *changing* the time reference you've lost. You can't reliably convert back, you see (as I found out when somebody asked me to implement a reverse GPS leap-second adjustment function). Just as when my Palm PDA automatically adjusts for DST twice a year: when you're inside the window where the change happens, you can't tell if it's been done already or not.
So, to conclude, the seconds-since-epoch count shouldn't have been referenced to UTC; it should have been TAI - International Atomic Time. TAI is always incrementing, never taking a breather the way UTC does.
And, in any case, I don't see why we can't, at this point, go to Planck time ticks since the ultimate epoch - the Big Bang. That's entirely feasible now, surprisingly.
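Feasible at least in terms of counter width -- a rough back-of-envelope, using approximate values for the Planck time and the age of the universe:

```python
PLANCK_TIME = 5.39e-44                      # seconds, approximately
AGE_OF_UNIVERSE = 13.8e9 * 365.25 * 86400   # ~13.8 billion years, in seconds

ticks = int(AGE_OF_UNIVERSE / PLANCK_TIME)
print(f"{ticks:.2e}")      # about 8.1e60 Planck ticks since the Big Bang
print(ticks.bit_length())  # 203 -- so a 256-bit counter has room to spare
```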
This is very disturbing, Eric. And it gets rather complicated.
I want my Unix time to be a monotonically increasing count of seconds since some reference time. It should increment accurately every second. It should not have any weird fits and starts. Basically as if it were a binary counter driven from a super accurate atomic clock.
Leap years and seconds and timezones and all that should not have any place in that basic time counting mechanism.
Disturbing because years ago I worked on a secure military packet radio system. We spent ages verifying how leap years worked and checking our implementation. At least for the next hundred years! We had a stringent requirement to make our packet timing the same as some American implementation of the same protocol. Such that any enemy listening could not tell if they were listening to Brits or Yanks. Now you are suggesting that our timing could have hiccuped when leap seconds happen. That would not be a problem if the American devices hiccuped the same way. But who knows what they do?
Aside: That American protocol implementation was terribly buggy. At least it did not match the specifications we had. It was a terrible job making our transmissions look like theirs!
>I want my Unix time to be a monotonically increasing count of seconds since some reference time. It should increment accurately every second. It should not have any weird fits and starts. Basically as if it were a binary counter driven from a super accurate atomic clock.
I agree -- that's what I always thought time_t was supposed to be, and if it had been implemented that way (as Tor suggested, by tracking TAI rather than UTC) then I think the leap second problem would be far more tractable. I was horrified when I found out that the POSIX standard definition of "seconds since the epoch" reads something like:
4.16 Seconds Since the Epoch
A value that approximates the number of seconds that have elapsed since the Epoch. A Coordinated Universal Time name (specified in terms of seconds (tm_sec), minutes (tm_min), hours (tm_hour), days since January 1 of the year (tm_yday), and calendar year minus 1900 (tm_year)) is related to a time represented as seconds since the Epoch, according to the expression below.
If the year is <1970 or the value is negative, the relationship is undefined. If the year is >=1970 and the value is non-negative, the value is related to a Coordinated Universal Time name according to the C-language expression, where tm_sec, tm_min, tm_hour, tm_yday, and tm_year are all integer types:
tm_sec + tm_min*60 + tm_hour*3600 + tm_yday*86400 +
(tm_year-70)*31536000 + ((tm_year-69)/4)*86400 -
((tm_year-1)/100)*86400 + ((tm_year+299)/400)*86400
Note the word "approximates" in the first sentence. Aargh! And note that each and every day is assumed to be 86400 seconds long (and the standard further explicitly requires this later on) even though in fact days are ~86400.001 seconds long (see https://timeanddate.com/time/earth-rotation.html).
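Transcribed into Python (with C's integer division written as `//` -- identical for the non-negative values involved), the expression above reproduces the 2016-10-01 value quoted earlier, 26 seconds short of the true elapsed count:

```python
def posix_seconds(tm_sec, tm_min, tm_hour, tm_yday, tm_year):
    # Direct transcription of the POSIX 4.16 expression
    return (tm_sec + tm_min*60 + tm_hour*3600 + tm_yday*86400
            + (tm_year - 70)*31536000 + ((tm_year - 69)//4)*86400
            - ((tm_year - 1)//100)*86400 + ((tm_year + 299)//400)*86400)

# Midnight UTC, 2016-10-01: tm_yday is 0-based and 2016 is a leap year,
# so Oct. 1 is day 274; tm_year is years since 1900, so 2016 -> 116.
print(posix_seconds(0, 0, 0, 274, 116))  # 1475280000
```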
There is a set of timezone databases (the "right" timezones) which incorporate leap seconds into the time zone conversion files so that the kernel can keep time in atomic time (time_t is a true count of atomic seconds since the epoch) and user programs can get correct civil time. Systems using these timezones are not technically POSIX compliant, but in practice I think it's the correct solution.
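The conversion those files enable is at heart just a table lookup. A minimal sketch -- the table here is truncated to three entries for illustration; the full list ships with tzdata as leap-seconds.list:

```python
import bisect

# (POSIX time_t at which the new offset takes effect, TAI-UTC in seconds)
LEAP_TABLE = [
    (63072000, 10),    # 1972-01-01: TAI-UTC set to 10 s when UTC adopted SI seconds
    (1435708800, 36),  # 2015-07-01
    (1483228800, 37),  # 2017-01-01
]
_KEYS = [entry[0] for entry in LEAP_TABLE]

def tai_minus_utc(t):
    """TAI-UTC offset (in seconds) in effect at POSIX time t (t >= 1972)."""
    i = bisect.bisect_right(_KEYS, t)
    return LEAP_TABLE[i - 1][1]

print(tai_minus_utc(1475280000))  # 36: the Oct. 2016 timestamp from earlier
```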
So basically the POSIX standards guys just cut and pasted whatever code was in whatever Unix version they were looking at and declared it a standard. Great.
so not enough space to run 129K DRI COBOL for CP/M-80 I guess.
CP/M-80 only supported 64K anyway, and most of the systems running it had less than that available in practice (due to BIOS code and whatnot). 56K was considered a lot. On newer systems, particularly emulated ones, it's possible to squeeze the BIOS code down: instead of a lot of native (8080) assembly code to handle disk sectors, you may be able to reduce that to just a single call which hooks into your emulator's high-level functions. So you'll see modern 62K CP/M systems, and sometimes even 64K - that's probably playing even more tricks.
CP/M 3, aka CP/M Plus, could use more RAM, with bank switching and a small common RAM area. These would typically have 128K RAM - not sure if CP/M 3 could handle more than 128K. I've not seen any systems with more. I could look it up I guess, I have the docs somewhere (from the absolutely great CP/M Plus hardware that disappeared before I knew what was happening).
(As far as I know the term CP/M-80 (without any version specifier) was only used when it was CP/M-2.x compatible).
In any case, the 129K size is just the zipfile; it contains many files. The compiler is smaller. But it may be using overlays (parts residing on disk, paged in and out as needed).
The ZiCog Z80 emulator used CP/M and other software from the Altair simulator, based on simh, by Peter Schorn. http://schorn.ch/altair.html
On that site you will find Microsoft MS-COBOL Version 4.65.
For fun you could install that emulator on your PC and see if COBOL runs in 64K.
My guess is that it will.
That emulator also has CP/M 3 and can bank-switch 16 banks of 64K! I don't know if anyone ever got that working on ZiCog. I did not want to go there.
If you search the forum you will find the excellent qz80 by Pullmoll. He did have CP/M 3 running but last I heard it was crashing at random for some unknown reason. He had built lots more on his emulator to support emulating Sinclair machines, not something I wanted to get into.
Sadly I have not had a working ZiCog emulator running here for ages. I kind of got put on hold while waiting for the P2...then life moved on...
Sounds like your best bet is to get a RamBlade and ZiCog from Cluso.
>That emulator also has CP/M 3 and can bank-switch 16 banks of 64K!
Well, that answered my question! As I said, I don't think I've seen physical hardware with more than two banks (128K), but that doesn't mean that it didn't exist.
Yeah, 128K was probably the max for memory on real CP/M machines. Memory was expensive.
However, CP/M has some surprises.
Its BIOS/BDOS could handle a maximum of 8 hard drives with a total capacity up in the hundreds of megabytes (I don't think it reached a gig). That's orders of magnitude more than anyone ever had in those days. (Numbers are vague here, I don't recall exactly.)
Quite likely the BIOS/BDOS could support 16 banks of 64K. But that is a megabyte. Unheard of at the time.
Hmm...the right timezone databases. You mean like here: https://www.ucolick.org/~sla/leapsecs/right+gps.html ?
Boy, what a mess that describes!
I found a COBOL compiler for CP/M on this site: http://www.cpm.z80.de/binary.html
The file is a 129K DRI COBOL for CP/M-80; the download is at http://www.cpm.z80.de/download/cobol80.zip
So how again can I run CP/M on a Propeller? Didn't Cluso99 sell some boards running CP/M?
Any help welcome,
Mike
But anyway, how much do I have to send you via PayPal to get a RamBlade running CP/M sent to Clearlake Oaks, CA 95423?
Mike
IIRC, the latest ZiCog has 8 X 8MB HDDs.