Landscape with live clients cannot handle a DB restore to a point in the past.
The scenario is Landscape running as usual, with live clients, restored to a DB backup taken in the past. After the service is brought up again with this data, clients start resyncing and become wedged, producing all sorts of tracebacks on the message server.
I left such a scenario running overnight, hoping that eventually the resyncs would settle down and everything recover, but that didn't happen. The resyncs continued.
An interesting one in particular was this:
Aug 22 21:46:26 message-server-2 ERR Error handling message 'operation-result' for computer 104: {'status': 6, 'timestamp': 1471901963, 'result-text': u'Mon Aug 22 21:39:23 UTC 2016\n', 'api': '3.3', 'operation-id': 533, 'type': 'operation-result'}
Traceback (most recent call last):
  File "/opt/canonical/landscape/canonical/landscape/message/apis.py", line 358, in _process_messages
    self.handle(message["type"], message)
  File "/opt/canonical/landscape/canonical/message/api.py", line 66, in handle
    return handler(type, body)
  File "/opt/canonical/landscape/canonical/message/handler.py", line 30, in __call__
    return function(self.message_api, type, body)
  File "/opt/canonical/landscape/canonical/lib/arguments.py", line 79, in replacement
    return original(*new_args, **new_kwargs)
  File "/opt/canonical/landscape/canonical/landscape/message/handlers/activity.py", line 32, in handle_activity_result
    activity.succeed(code=result_code, text=result_text)
AttributeError: 'NoneType' object has no attribute 'succeed'
That traceback is about an activity whose result had already been delivered by the client, but which does not exist in the restored DB, so the lookup returns None.
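A minimal sketch of that failure mode, using hypothetical names (`get_activity`, the `activities` dict, and this `Activity` class are illustrative stand-ins, not the actual Landscape code): the handler looks up the activity by operation-id, the restored DB no longer knows that id, the lookup returns None, and the unguarded `activity.succeed(...)` call raises the AttributeError seen above. A guard like the one below is one possible way to discard results for unknown operations instead of crashing the handler.

```python
class Activity:
    """Stand-in for a Landscape activity record."""

    def __init__(self):
        self.result = None

    def succeed(self, code, text):
        self.result = (code, text)


def get_activity(activities, operation_id):
    # After a restore to an older backup, an operation-id the client has
    # already completed (e.g. 533) may no longer exist: returns None.
    return activities.get(operation_id)


def handle_activity_result(activities, body):
    activity = get_activity(activities, body["operation-id"])
    if activity is None:
        # This guard is what the traceback suggests is missing: without it,
        # activity.succeed() is called on None and raises AttributeError.
        return "discarded unknown operation %d" % body["operation-id"]
    activity.succeed(code=body["status"], text=body["result-text"])
    return "ok"
```

With an empty restored DB, the message from the traceback is discarded rather than crashing; with the activity present, it is marked succeeded.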