----------------------------------------------------------------------
Message-ID: <Pine.NEB.4.64.1812292302100.29435@panix5.panix.com>
Date: Sat, 29 Dec 2018 23:02:19 -0500 (EST)
From: danny burstein <dannyb@panix.com>
Subject: Cyber attack against newspapers
[twitter]
A cyberattack that appears to have originated from outside the United
States caused major printing and delivery disruptions at several
newspapers across the country on Saturday, including the Los Angeles
Times:
https://www.latimes.com/local/lanow/la-me-ln-times-delivery-breakdown-20181229-story.html
_____________________________________________________
Knowledge may be power, but communications is the key
dannyb@panix.com
[to foil spammers, my address has been double rot-13 encoded]
------------------------------
Message-ID: <q0bhje$svc$2@reader2.panix.com>
Date: 30 Dec 2018 22:41:50 +0000
From: "danny burstein" <dannyb@panix.com>
Subject: old NYC police "call boxes", was: Nationwide internet
outage ...
In <q06v62$t2r$1@pcls7.std.com> "Michael Moroney"
<moroney@world.std.spaamtrap.com> writes:
[snip]
>That is correct. In New York City the system has been partially
>upgraded to voice call boxes in the old Gamewell mounts, but the old
>boxes still in use in some boroughs send 4 digit numbers as equally
>spaced pulses, not as Morse code. The boxes are wind-up and work not
>too differently from wind-up music boxes. Originally the pulses rang
>a bell and the dispatchers had to count pulses, these days a computer
>counts the pulses and enters the number into the dispatch system.
>Some boxes have/had telegraph keys inside so I assume at one time they
>did manually use Morse Code to call for additional assistance or
>otherwise report status to borough headquarters.
Not Morse Code, per se, but a series of "ten codes". For
example (made up here as I don't have the list at hand)
an officer would tap in four times for a fire truck
response, seven for an ambulance, etc.
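As a side note on the pulse signaling described in the quoted text, here
is a minimal sketch, in Python, of how a dispatch computer might count
equally spaced pulses into a four-digit box number. The timings and
grouping below are assumptions for illustration only, not the actual
Gamewell or NYPD specifications.

    # Hypothetical decoder for a wind-up call box that signals a 4-digit
    # box number as groups of equally spaced pulses, one group per digit,
    # with a longer silence between digits. Timings are illustrative only.

    DIGIT_GAP = 2.0  # assumed seconds of silence separating digit groups

    def decode_box_number(pulse_times):
        """pulse_times: ascending timestamps (in seconds) of received pulses."""
        if not pulse_times:
            return ""
        digits = []
        count = 1
        for prev, cur in zip(pulse_times, pulse_times[1:]):
            if cur - prev > DIGIT_GAP:   # long silence: the digit is complete
                digits.append(count)
                count = 1
            else:                        # another pulse within the same digit
                count += 1
        digits.append(count)
        return "".join(str(d) for d in digits)

    # Box 2153: 2 pulses, pause, 1 pulse, pause, 5 pulses, pause, 3 pulses.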
--
_____________________________________________________
Knowledge may be power, but communications is the key
dannyb@panix.com
[to foil spammers, my address has been double rot-13 encoded]
------------------------------
Message-ID: <q0c0to$52a$1@dont-email.me>
Date: 30 Dec 2018 22:03:17 -0500
From: "Fred Goldstein" <fg_es@removeQRM.ionary.com>
Subject: T-Mobile US is now even more "Comcastic"
It used to be that cable companies were known for poor customer service.
Mobile carriers were more sensitive to competition. Cable hasn't really
improved. But T-Mobile this year has gone from top-shelf to subterranean
in service quality. Hence this tale of woe.
I recently moved my office, and I need to keep the phone number. Last
time I needed to move phone numbers to a neighboring rate center, this
past summer, I switched them to Google Voice, which forwards them to
wherever I want, including "invisible" numbers in the new physical rate
center. But you can't move a wireline number to Google Voice directly.
They just don't allow it. So the trick is to port the number to a mobile
carrier, then to Google. And everybody says T-Mobile is the easiest
mobile carrier to work with -- no contracts, no minimums, and easy SIMs
for sale. Last summer, it worked perfectly. I got a cheap unlocked
phone, got a SIM from T-M, ported in the number, forwarded the phone to
the proper destination, then a few days later let Google Voice port it
over. I had a little confusion with T-M over the no-data rate plan
price, but it got fixed.
I tried again last week and it was an entirely different situation. I
made one mistake, asking for prepaid. Turns out you can't forward calls
from a prepaid phone, though you can port it out again. However, T-M
made a HUGE error. The number they put on the shiny new SIM was one that
was in collection -- the previous owner was behind on a postpaid bill,
so whenever I tried to enter the number into T-M's phone jail, it got
diverted to a recording giving the collection agency's address! I spent
FOUR HOURS on the phone trying to get past that, but finally gave up,
went back to the store, and swapped the SIM for another, with a clean
number on it. And yes, Ms. Jackson, they did reveal the (CPNI) name of
the previous number holder.
Problem two is that T-M's support for prepaid is execrable. They have
two prepaid programs, "Legacy" and "Rebellion". New cards are in the
latter class. And they presumably think that prepaid customers are all
deadbeats anyway, so talking to a rep takes 45 minutes on hold if you
get through at all.
But with a clean number, I initiated a number port. I showed the old
phone bill, with address and account number, to the store clerk. That
was a Thursday. Ports from cable to mobile usually take one day (as did
my last two ports). But by this morning (Sunday), no response. So I call
in. Oh, the port was REJECTED by the old carrier, because the ZIP code
was wrong. Instead of my (old) ZIP code, they had put down the one for
Dorchester Center, where I'm guessing one of the store clerks lived. And
if there are ANY mismatches, the port fails. I asked them to resubmit
with the right ZIP; I have to wait for a response.
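For anyone wondering why a single wrong field sinks a port: the losing
carrier validates the request against the record it has on file, and any
mismatch is an automatic rejection. Here is a minimal sketch of that
all-fields-must-match rule, in Python, with made-up field names and
values rather than any carrier's actual CSR/LSR format:

    # Illustrative only: an LNP-style check that rejects a port request
    # when any field disagrees with the losing carrier's record on file.
    # Field names and values are hypothetical.

    CSR_ON_FILE = {
        "account_number": "123456789",
        "name": "Example Subscriber",
        "zip": "02139",
    }

    def validate_port_request(request):
        """Return the mismatched fields; an empty list means the port may proceed."""
        return [field for field, expected in CSR_ON_FILE.items()
                if request.get(field) != expected]

    # One wrong ZIP code (e.g., a clerk's home ZIP) and the whole port fails.
    request = {"account_number": "123456789", "name": "Example Subscriber",
               "zip": "02124"}
    mismatches = validate_port_request(request)
    if mismatches:
        print("Port REJECTED, mismatched fields:", mismatches)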
So after a couple of hours of no feedback, I have the store replace my
Rebellion SIM with a postpay one (credit check, ID check, etc.). And I
ask for the port to be restarted to the postpaid account. It takes a
while, but eventually it goes into the system and I'm supposed to hear
back within a couple of hours.
Hearing nothing, I call in to the T-M porting center (1 877 789 3106).
The person there tells me that he will be the LAST one I'll need to
speak to, he'll get it all fixed. And waddayaknow, the port that I had
reinitiated this afternoon, to the postpaid account, had a DIFFERENT
wrong ZIP code on it! They replaced the "from" ZIP code with my new T-M
bill-to ZIP code. So he thinks he fixed it.
As night wears on, I call in again. Now the number is "stuck". The
original Rebellion port, rejected, is hanging up the number. And the
first guy on the phone sees yet another, nonexistent, ZIP code, which
that "last" guy might have left there. It takes two transfers and again
over an hour on the phone to Porting to get to someone who thinks he can
override that. And it takes him a while to get the right ZIP code. So
he's initiating yet another new request for a port.
At this point it's just waiting... and hoping that I can get RCN to keep
the number alive after I've left the site where the EMTA is. I will
leave it in place, the line forwarded, and hope that the landlord hasn't
got a new tenant coming right in. I've spent many hours on the phone
with T-M this week and it convinces me that the company has really
jumped the shark. They used to be the easiest to work with. None of
their phone employees are in the US (all in Guatemala and the Philippines)
and most just don't seem to care. They will "stay on the line" with you
while transferring you to endless hold, taking more calls to keep up
their scores. It's a mess. John Legere should be ashamed. He should be
paying attention to his company, not writing cookbooks.
Ironically, this all pointed to a bug in my VZW service. Someone from
T-M porting called me back and I just missed getting my Verizon
Blackberry to answer on time. I did not see a voice mail, though they
later told me they left one. Turns out that Verizon VM to the Blackberry
KeyONE doesn't work any more! A security fix broke the message waiting
indicator. I found the fix on Crackberry. It required a call to VZ and a
level of escalation to second-level support, to change the VM to "basic
IMS", not "visual IMS", which no longer works on that model. But the
second-level support person in the midwestern US knew how to do that,
fixed it, and made a test call; I got the message, and all's well with
VZ after a 15-minute call. That's support, and one reason (beside the
fact that the signal is much better!) why I stick with VZW even though I
don't touch VZ wireline.
------------------------------
Message-ID: <20181231043943.GA8218@telecom.csail.mit.edu>
Date: Sun, 30 Dec 2018 23:39:43 -0500
From: Bill Horne <bill@horneQRM.net>
Subject: Possible Centurylink Outage Report
This appears to be an outage report from Centurylink, but I can't
verify its authenticity. I had to substitute ASCII for some
multi-byte characters.
Bill Horne
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
* Event Conclusion Summary *

Outage Start: December 27, 2018 08:40 GMT
Outage Stop: December 29, 2018 10:12 GMT

Root Cause: A CenturyLink network management card in Denver, CO was
propagating invalid frame packets across devices.

Fix Action: To restore services, the card in Denver was removed from
the equipment, secondary communication channel tunnels between specific
devices were removed across the network, and a polling filter was
applied to adjust the way the packets were received in the equipment.
As repair actions were underway, it became apparent that additional
restoration steps were required for certain nodes, which included
either line card resets or Field Operations dispatches for local
equipment login. Once completed, all services were restored.

RFO Summary: On December 27, 2018 at 08:40 GMT, CenturyLink identified
an initial service impact in New Orleans, LA. The NOC was engaged to
investigate the cause, and Field Operations were dispatched for
assistance onsite. Tier IV Equipment Vendor Support was engaged as it
was determined that the issue was larger than a single site. During
cooperative troubleshooting between the Equipment Vendor and
CenturyLink, a decision was made to isolate a device in San Antonio,
TX from the network as it seemed to be broadcasting traffic and
consuming capacity. This action did alleviate impact; however,
investigations remained ongoing. Focus shifted to additional sites
where network teams were unable to remotely troubleshoot equipment.
Field Operations were dispatched to sites in Kansas City, MO, Atlanta,
GA, New Orleans, LA and Chicago, IL for onsite support. As visibility
to equipment was regained, Tier IV Equipment Vendor Support evaluated
the logs to further assist with isolation. Additionally, a polling
filter was applied to the equipment in Kansas City, MO and New
Orleans, LA to prevent any additional effects. All necessary
troubleshooting teams, in cooperation with Tier IV Equipment Vendor
Support, were working to restore remote visibility to the remaining
sites. The issue had CenturyLink Executive level awareness for the
duration. A plan was formed to remove secondary communication channels
between select network devices until visibility could be restored,
which was undertaken by the Tier IV Equipment Vendor Technical Support
team in conjunction with CenturyLink Field Operations and NOC
engineers. While that effort continued, investigations into the logs,
including packet captures, were occurring in tandem, which ultimately
identified a suspected card issue in Denver, CO. Field Operations were
dispatched to remove the card. Once removed, it did not appear there
had been significant improvement; however, the logs were further
scrutinized by the Vendor's Advanced Support team and CenturyLink
Network Operations to identify that the source packet did originate
from this card. CenturyLink Tier III Technical Support shifted focus
to the application of strategic polling filters along with the
continued efforts to remove the secondary communication channels
between select nodes. Services began incrementally restoring. An
estimated restoral time of 09:00 GMT was provided; however, as repair
efforts steadily progressed, additional steps were identified for
certain nodes that impeded the restoration process. This included
either line card resets or Field Operations dispatches for local
equipment login. Various repair teams worked in tandem on these
actions to ensure that services were restored in the most expeditious
method available. By 2:30 GMT on December 29, it was confirmed that
the impacted IP, Voice, and Ethernet Access services were once again
operational. Point-to-point Transport Waves as well as Ethernet
Private Lines were still experiencing issues as multiple Optical
Carrier Groups (OCG) were still out of service. The Transport NOC
continued to work with the Tier IV Equipment Vendor Support and
CenturyLink Field Operations to replace additional line cards to
resolve the OCG issues. Several cards had to be ordered from the
nearest sparing depot. Once the remaining cards were replaced, it was
confirmed that all services except a very small set of circuits had
restored, and the Transport NOC will continue to troubleshoot the
remaining impacted services under a separate Network Event. Services
were confirmed restored at 10:12 GMT. Please contact the Repair Center
to address any lingering service issues.

Additional Information: Please note that as formal post-incident
investigations and analysis occur, the details relayed here may
evolve. Locating the management card in Denver, CO that was sending
invalid frame packets across the network took significant analysis and
packet captures to be identified as a source, as it was not in an
alarm status. The CenturyLink network continued to rebroadcast the
invalid packets through the redundant (secondary) communication
routes. CenturyLink will review troubleshooting steps to ensure that
any areas of opportunity regarding potential for restoral acceleration
are addressed. These invalid frame packets did not have a source,
destination, or expiration and were cleared out of the network via the
application of the polling filters and removal of the secondary
communication paths between specific nodes. The management card has
been sent to the equipment vendor, where extensive forensic analysis
will occur regarding the underlying cause and how the packets were
introduced in this particular manner. The card has not been replaced
and will not be until the vendor review is supplied. There is no
increased network risk with leaving it unseated. At this time, there
is no indication that there was maintenance work on the card,
software, or adjacent equipment. The CenturyLink network is not at
risk of reoccurrence due to the placement of the polling filters and
the removal of the secondary communication routes between select
nodes.
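As a conceptual aside, here is a minimal sketch, in Python, of why a
frame with no source, destination, or expiration can circulate forever
over redundant paths, and how an ingress filter clears it. The frame
fields, names, and filter below are hypothetical illustrations, not
CenturyLink's or the vendor's actual management-frame format or filter.

    # Conceptual sketch only. A management frame lacking source,
    # destination, and expiration (hop count) gives a relay no reason to
    # ever stop forwarding it; a simple ingress filter drops such frames
    # instead of rebroadcasting them. Field names are hypothetical.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class MgmtFrame:
        source: Optional[str]       # originating node ID, if any
        destination: Optional[str]  # target node ID, if any
        hops_left: Optional[int]    # remaining hops before the frame expires

    def should_forward(frame):
        """Ingress policy: only well-formed, unexpired frames are relayed."""
        if frame.source is None or frame.destination is None:
            return False            # no addressing: nothing legitimate to route
        if frame.hops_left is None or frame.hops_left <= 0:
            return False            # no expiration: it would loop forever
        return True

    def relay(frame):
        """Decrement the hop count on relay so every frame eventually dies."""
        if not should_forward(frame):
            return None
        return MgmtFrame(frame.source, frame.destination, frame.hops_left - 1)

    # The report's packets had none of these fields; without a filter,
    # every node keeps rebroadcasting them over the secondary channels.
    assert relay(MgmtFrame(source=None, destination=None, hops_left=None)) is None
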
* 2018-12-29 12:48:18 GMT - The Transport NOC continues to monitor the
network to ensure impacted services have remained restored and
stable. If additional issues are experienced, please contact the
CenturyLink Repair Center. A final notification will be provided
momentarily.
* 2018-12-29 11:56:08 GMT - The Transport NOC advises Field Operations
has replaced the impacted cards. The affected Optical Carrier Groups
have stabilized, thus all service affecting alarms have
cleared and impacted services have restored. The Transport NOC has
identified and is aware of a smaller set of services that have not
restored and will continue to investigate and resolve those services
under an alternate Network Event. The Transport NOC and equipment
vendor are continuing to monitor for network stability. Additional
Field Operations have been dispatched to clear the remaining Optical
Carrier Groups that are still out of service and cannot be restored
remotely.
* 2018-12-29 01:25:17 GMT - The Transport NOC continues to work with
the Equipment Vendor's Support Teams to investigate multiple Opti-
cal Carrier Groups that are still out of service impacting Point to
Point Transport Waves as well as Ethernet Private Lines. Both
CenturyLink and the Equipment Vendor's Field Operations teams
have dispatched to the necessary sites to assist with isolation.
Additional cards have been ordered and shipped to sites across the
United States in an effort to restore the Optical Carrier Groups to
complete full network restoral.
* 2018-12-29 00:31:23 GMT - Field Operations in cooperation with the
Engineering teams have repaired the span traversing the western United
States through loop testing. Once the equipment was restored,
additional capacity was in turn available to the span on the
CenturyLink Network. IP, Voice, and Ethernet Access services are
expected to have restored with the now available capacity. Point-
to-Point Transport Waves as well as Ethernet Private Lines may still
experience issues while the remainder of the final card issues are
resolved. Lingering latency may be present, which is anticipated to
subside as routing continues to normalize. If issues are still being
experienced with your IP, Voice, and Ethernet Access services please
contact the CenturyLink Repair Center.
* 2018-12-28 23:02:29 GMT - As the Equipment Vendor and CenturyLink
Engineering teams continue to work to clear the lingering card issues,
it has been confirmed that alarms continue to clear, and network
capacity is being restored. Efforts will remain ongoing to continue
to resolve any further issues identified.
* 2018-12-28 21:42:05 GMT - The Transport NOC has confirmed that
visibility has been restored to all nodes, allowing triage of the
additional cards to be completed. Engineering continues to review
the network to identify, review, and clear the remaining alarms and
issues observed. Field Operations continue to remain on standby and
dispatch to sites as necessary to assist with isolation and
resolution.
* 2018-12-28 20:31:40 GMT - Efforts to complete the line card resets
remain ongoing, while additional support teams continue to triage
chassis within a smaller set of nodes that did not have full visi-
bility restored as well as additional line cards within the network.
The highest level of Engineering support from both the Equipment
Vendor as well as CenturyLink continue to diligently work to restore
services.
* 2018-12-28 19:27:05 GMT - CenturyLink Engineering in cooperation
with the Equipment Vendor's Tier IV Support continue to systema-
tically review the network alarms and triage line cards within the
network to ensure remote resets or physically reseats on site can be
completed.
* 2018-12-28 18:23:33 GMT - The Transport NOC has confirmed that
visibility has been restored to the majority of the network outside
of a few remaining nodes that are in various states of recovery.
Engineering has identified the line cards that will need to be reset
and are working diligently to perform the necessary actions to bring
all cards back online.
* 2018-12-28 17:15:20 GMT - It has been confirmed that visibility has
been restored to the majority of the nodes across the network.
Field Operations have been dispatched to assist with recovering
visibility to the few remaining nodes. Engineering is working to
systematically review the network alarms on the other nodes and are
then performing remote manual resets to individual cards that remain
in alarm. Reinstate times for each card may vary significantly, as
such an estimated completion time is not yet available. If cards do
not automatically reinstate after remote resets complete, Field
Operations are standing by to dispatch as needed. The Equipment
Vendor's Tier IV team continues to assist with the resolution
efforts.
* 2018-12-28 13:35:00 GMT - Efforts by the Equipment Vendor and
CenturyLink engineers to apply the filters and remove the secondary
communication channels in the network continue. The previously
provided ETR of 09:00 GMT remains.
* 2018-12-28 13:27:30 GMT - The Equipment Vendor and CenturyLink
engineers continue work to apply the filters and remove the
secondary communication channels. Field Operations and Equipment
Vendor dispatches to recover nodes locally remain underway. Services
continue to restore in a steady manner as troubleshooting progresses
following the recovery of nodes. CenturyLink NOC management remains
in contact with the equipment vendor to obtain updates as restora-
tion efforts continue.
* 2018-12-28 11:04:24 GMT - CenturyLink continues to work with the
Equipment Vendor to apply the filters and remove the secondary
communication channels. Field Operations and Equipment Vendor dispatches
to recover nodes locally remain underway. Client services continue
to restore in a steady manner as troubleshooting progresses
following the recovery of nodes.
* 2018-12-28 10:05:18 GMT - CenturyLink NOC Management reports
steady progression of node recovery and restoral of client services.
In addition to the remote node recovery process, Field Operations
continue to dispatch and assist the Equipment Vendor with local
equipment login.
* 2018-12-28 08:51:29 GMT - CenturyLink NOC Management has advised
that repair efforts are steadily progressing, and services are incrementally
restoring. The Equipment Vendor and CenturyLink engineers continue work
to apply the filters and remove the secondary communication channels at
this time. There have been additional restoration steps identified for cer-
tain nodes, which includes either line card resets or Field Operations dis-
patches for local equipment login, that have impeded the restoration process.
Various repair teams are working in tandem on these actions to ensure that
services are restored in the most expeditious method available. Restoration
efforts are ongoing.
* 2018-12-28 07:12:32 GMT - Efforts by the Equipment Vendor and
CenturyLink engineers to apply the filters and remove the secondary
communication channels in the network continue. Additional
information on repair progress will be available from the Equipment
Vendor by 07:30 GMT. Information will be relayed as soon as it is
obtained.
* 2018-12-28 06:00:01 GMT - Efforts by the Equipment Vendor and
CenturyLink engineers to apply the filters and remove the secondary
communication channels in the network continue. The previously
provided ETR of 09:00 GMT remains.
* 2018-12-28 04:58:44 GMT - CenturyLink engineers in conjunction with
the Equipment Vendor's Tier IV Technical Support team have identi-
fied the elements causing the impact to customer services. Through
the filters being applied and the removal of the secondary communi-
cation channels, it is anticipated services will be fully restored
within four hours. We apologize for any inconvenience this caused
our customers. Additional details regarding the underlying cause will
be relayed as available.
* 2018-12-28 04:09:31 GMT - The Equipment Vendor's Tier IV Technical
Support team in conjunction with CenturyLink Tier III Technical
Support continues to remotely work to remove the secondary communi-
cation channel tunnels across the network until full visibility can
be restored, as well as applying the necessary polling filter to
each of the reachable nodes.
* 2018-12-28 02:53:38 GMT - The Transport NOC has confirmed that
cooperative efforts remain ongoing to remove the secondary communi-
cation channel tunnel across the network until full visibility can
be restored, as well as applying the necessary filter to each of the
reachable nodes. It has been confirmed that both of these actions
are being performed remotely, but an estimated time to complete the
activities is not available at this time.
* 2018-12-28 01:58:56 GMT - Once the card was removed in Denver, CO it
was confirmed that there was no significant improvement. Additional
packet captures and logs will be pulled from the device with the
card removed to further isolate the root cause. The Equipment vendor
continues to work with CenturyLink Field Operations at multiple
sites to remove the secondary communication channel tunnel across
the network until full visibility can be restored. The equipment
vendor has identified a number of additional nodes that visibility
has been restored to, and their engineers are currently working to
apply the necessary filter to each of the reachable nodes.
* 2018-12-28 00:59:04 GMT - Following the review of the logs and
packet captures, the Equipment Vendor's Tier IV Support team has
identified a suspected card issue in Denver, CO. Field Operations
has arrived on site and are working in cooperation with the
Equipment Vendor to remove the card.
* 2018-12-27 23:57:16 GMT - The Equipment Vendor is currently
reviewing the logs and packet captures from devices that have been
completed, while logs and packet captures continue to be pulled
from additional devices. The necessary teams continue to remove a
secondary communication channel tunnel across the network until
visibility can be restored. All technical teams continue to
diligently work to review the information obtained in an effort to
isolate the root cause.
* 2018-12-27 22:52:43 GMT - Multiple teams continue work to pull
additional logs and packet captures on devices that have had
visibility restored, which will be scrutinized during root cause
analysis. The Tier IV Equipment Vendor Technical Support team in
conjunction with Field Operations are working to remove a secondary
communication channel tunnel across the network until visibility can
be restored. The Equipment Vendor Support team has dispatched their
Field Operations team to the site in Chicago, IL and has been
obtaining data directly from the equipment.
* 2018-12-27 21:35:55 GMT - It has been advised that visibility has
been restored to both the Chicago, IL and Atlanta, GA sites.
Engineering and Tier IV Equipment Vendor Technical Support are currently
working to obtain additional logs from devices across multiple sites
including Chicago and Atlanta to further isolate the root cause.
* 2018-12-27 21:01:26 GMT - On December 27, 2018 at 02:40 GMT,
CenturyLink identified a service impact in New Orleans, LA. The NOC
was engaged and investigating in order to isolate the cause. Field
Operations were engaged and dispatched for additional investiga-
tions. Tier IV Equipment Vendor Support was later engaged. During
cooperative troubleshooting, a device in San Antonio, TX was isolated
from the network as it seemed to be broadcasting traffic and consuming
capacity, which appeared to alleviate some impact. Investigations
remained ongoing. Following the isolation of the San Antonio, TX
device troubleshooting efforts focused on additional sites that
teams were remotely unable to troubleshoot. Field Operations were
dispatched to sites in Kansas City, MO, Atlanta, GA, New Orleans, LA
and Chicago, IL. Tier IV Equipment Vendor Support continued to
investigate the equipment logs to further assist with isolation.
Once visibility was restored to the sites in Kansas City, MO and New
Orleans, LA, a filter was applied to the equipment to further
alleviate the impact observed. All of the necessary troubleshooting
teams in cooperation with Tier IV Equipment Vendor Support are
working to restore remote visibility to the remaining sites at this
time. Tier IV Equipment Vendor Technical Support continues to review
equipment logs from the sites where visibility was previously
restored. We understand how important these services are to our
clients and the issue has been escalated to the highest levels
within CenturyLink Service Assurance Leadership.
https://fuckingcenturylink.com/
***** Moderator's Note *****
This notice doesn't mention 911. That's puzzling: there were outages
of 911 service in many areas, although they are reported as being
limited to cellular users.
The report implies that a fault occurred in several high-capacity
MUXes, which IIRC wouldn't usually be used to carry 911 traffic. My
experience was all in wireline, so I'll ask those of you who work in
the mobile world if Centurylink is allowed to have mobile switches
carry traffic across LATA boundaries.
Bill Horne
Moderator
--
Bill Horne
(Remove QRM from my email address to write to me directly)
------------------------------
*********************************************
End of telecom Digest Tue, 01 Jan 2019