Operations Analytics User Discussions

ops-collector debug

SOLVED
Harald (Acclaimed Contributor)

ops-collector debug

I have set up the following in OMi/Agent:

  • Generic Output from Structured Log File
  • Data Forwarding Policy

The goal is to stream event data to OBA, but in OBA I cannot see or search the data/logs. I think I have set up everything according to the OBA docs... (Stream logs with Operations Agent)

How can I verify whether the data arrives in OBA? Any log files or flow-trace information would be helpful.


4 REPLIES
Micro Focus Expert

Re: ops-collector debug

If you are implementing SSL, things get a lot more complicated and the documentation can be confusing. Assuming you are not working in a hardened env, here are a few things I look at:

- Run "ovpolicy -list" to make sure the two policies have been applied to your agent, then look at /var/opt/OV/log/System.txt; sometimes it shows problems, sometimes not.

- Monitor the communication traffic by running something like "tcpdump -A -i any port 9444" on the OA system and/or the OBA collector system.

- You can also see the JSON messages arriving by running this command on the OBA collector as the opsa user: "/opt/HP/opsa/kafka/bin/kafka-console-consumer.sh --bootstrap-server <obacollectorhostname>:9092 --topic opsa_default.opsa.data.log.1.0.di.input". You can also try adding the --from-beginning option.

- If you see the JSON OK but still don't see the data in the OBA GUI, look at recently updated /opt/HP/opsa/log/opsa-storm/*log files for errors.

- If you can't get your custom policies to work, try editing the Sys_Dataforwarding policy to send to your OBA collector, then update and apply the System Log collection aspect to your target system. Logging in and out of the system, or using commands like "logger test", should cause /var/log activity that this OBM policy will pick up. You should see it in OBA when you search for: Text: * | where source_type="PlainLog"
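Strung together, the checks above make a small verification script. This is just a sketch: the paths and topic name come from this thread, but the collector hostname is a placeholder and the `command -v`/file guards are mine, so adapt it to your environment (tcpdump also needs root).

```shell
#!/bin/sh
# Sketch: run the OA -> OBA streaming checks in order. OBA_COL is a
# placeholder hostname; the guards let the script run even on a box
# that only has some of the tools installed.
OBA_COL="${OBA_COL:-obacollector.example.com}"
TOPIC="opsa_default.opsa.data.log.1.0.di.input"
KAFKA_CLI=/opt/HP/opsa/kafka/bin/kafka-console-consumer.sh

step() { echo "== $*"; }

step "1: policies applied on the agent?"
command -v ovpolicy >/dev/null 2>&1 && ovpolicy -list

step "2: recent agent-side errors?"
[ -f /var/opt/OV/log/System.txt ] && tail -n 50 /var/opt/OV/log/System.txt

step "3: traffic on the forwarding port? (20 packets, then stop)"
command -v tcpdump >/dev/null 2>&1 && tcpdump -A -c 20 -i any port 9444

step "4: JSON arriving in Kafka? (run on the collector as opsa)"
[ -x "$KAFKA_CLI" ] && "$KAFKA_CLI" --bootstrap-server "$OBA_COL:9092" \
  --topic "$TOPIC" --from-beginning
step "done"
```

Each step narrows the failure down: 1-2 are agent-side, 3 is the network, 4 is the collector.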

hope this helps

dougg

 

Harald (Acclaimed Contributor)

Re: ops-collector debug

@dougg

First, thanks for the quick reply.
tcpdump shows "something", but I cannot see any output here: /opt/HP/opsa/kafka/bin/kafka-console-consumer.sh... Instead, the Kafka log shows:

2018-11-08 13:36:15 WARN [Controller-8-to-broker-8-send-thread] RequestSendThread:91 - [Controller-8-to-broker-8-send-thread], Controller 8's connection to broker 192.168.111.75:9092 (id: 8 rack: null) was unsuccessful
java.io.IOException: Connection to <collector-host>:9092 (id: 8 rack: null) failed
        at kafka.utils.NetworkClientBlockingOps$.awaitReady$1(NetworkClientBlockingOps.scala:84)
        at kafka.utils.NetworkClientBlockingOps$.blockingReady$extension(NetworkClientBlockingOps.scala:94)
        at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:232)
        at kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:185)
        at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:184)
        at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)

 

Is there anything else I can debug? I think the data does not reach the collector somehow (or it gets rejected...)
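One way to narrow down a controller warning like that, independent of any Kafka tooling, is a plain TCP probe of the broker port. A sketch (the hostname is a placeholder for the OBA collector; it uses bash's /dev/tcp redirection):

```shell
#!/bin/bash
# Sketch: probe TCP reachability of the Kafka broker port without any
# Kafka tooling. OBA_COL is a placeholder -- set it to your collector.
check_port() {
  # returns 0 if a TCP connection to $1:$2 succeeds within 3 seconds
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

if check_port "${OBA_COL:-obacollector.example.com}" 9092; then
  echo "broker port 9092 reachable"
else
  echo "broker port 9092 NOT reachable: check that Kafka is up and that"
  echo "its listeners bind to an interface the agent can reach"
fi
```

If even this fails from the collector itself, the problem is the broker (down, or bound to the wrong interface) rather than the agent policies.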

Micro Focus Expert
Solution

Re: ops-collector debug

Ideally, it is good practice to get OA log streaming working before hardening the environment. In a hardened environment you can still use the Kafka consumer CLI to show the arrivals; however, Kafka needs the certificate information. The command line run on the collector would then look something like this:

/opt/HP/opsa/kafka/bin/kafka-console-consumer.sh --bootstrap-server <yourobacol>:9092 --topic opsa_default.opsa.data.log.1.0.di.input --consumer.config /tmp/kafkassl

I can't give you the contents of that SSL file, but the format looks like this:

security.protocol=SSL
ssl.keystore.location=/opt/HP/opsa/conf/ssl/opsa-keystore.jks
ssl.keystore.password=
ssl.key.password=
ssl.truststore.location=/opt/HP/opsa/conf/ssl/opsa-truststore.jks
ssl.truststore.password=
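Before wiring that file into the consumer, it may be worth confirming that the keystore and truststore paths and passwords are valid on their own. A sketch using keytool (from the bundled JRE) and the paths from the config above; it prompts for each store password:

```shell
# List the certificate entries in the OBA keystore and truststore to
# confirm the paths and passwords from the consumer config are valid.
# keytool prompts for each store password on stdin.
for ks in /opt/HP/opsa/conf/ssl/opsa-keystore.jks \
          /opt/HP/opsa/conf/ssl/opsa-truststore.jks; do
  keytool -list -keystore "$ks" || echo "could not read $ks"
done
```

If keytool can list the entries, any remaining SSL failure is in the Kafka consumer settings rather than the stores themselves.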

If you contact MF Support they should be able to tell you the default passwords if you have not changed them.

also FYI: https://docs.microfocus.com/itom/Operations_Bridge_Analytics:3.04/Hardening/Parts/ssl_intro/SSL_Logstreaming

 

Harald (Acclaimed Contributor)

Re: ops-collector debug

Thanks for the help. I will retry using HTTP only.