This has been driving me batty. I'm not seeing anything that should cause an issue, but out of the box OpenSearch doesn't seem to work. Everything below is after making sure /usr/local/etc/opensearch and /var/db/opensearch were removed prior to reinstalling, so there is nothing left over that could interfere with the testing.
Got this in rc.conf...
Code:
opensearch_enable="YES"
opensearch_flags="-Enetwork.host=127.0.0.1 -Ediscovery.type=single-node -Eplugins.security.disabled=true"
opensearch_env="OPENSEARCH_INITIAL_ADMIN_PASSWORD=admin"
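(For anyone comparing setups: those `-E` flags map one-to-one onto `opensearch.yml` settings, so the same configuration can be expressed in the config file instead. This is just a sketch of the equivalent `/usr/local/etc/opensearch/opensearch.yml`, in case the rc script isn't passing the flags through; I haven't confirmed that's the problem here.)
Code:
```yaml
# Equivalent of the -E flags above, expressed in opensearch.yml
network.host: 127.0.0.1
discovery.type: single-node
plugins.security.disabled: true
```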
And judging from `/var/log/opensearch/opensearch.log`, it looks like it starts up fine.
Code:
[root@🌐nibbles0]head| main[$]🧙took 6m3s 🧱130>tail -n 20 /var/log/opensearch/opensearch.log
[2026-03-22T13:04:31,585][INFO ][o.o.i.MergeSchedulerConfig] [nibbles0.vulpes.vvelox.net] Updating autoThrottle for index .plugins-ml-config from [true] to [true]
[2026-03-22T13:04:31,605][INFO ][o.o.c.r.a.AllocationService] [nibbles0.vulpes.vvelox.net] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.plugins-ml-config][0]]]).
[2026-03-22T13:04:31,615][INFO ][o.o.m.e.i.MLIndicesHandler] [nibbles0.vulpes.vvelox.net] create index:.plugins-ml-config
[2026-03-22T13:04:31,630][INFO ][o.o.m.c.MLSyncUpCron ] [nibbles0.vulpes.vvelox.net] ML configuration initialized successfully
[2026-03-22T13:04:41,543][WARN ][o.o.c.InternalClusterInfoService] [nibbles0.vulpes.vvelox.net] No resource usage stats available for node: nibbles0.vulpes.vvelox.net
[2026-03-22T13:05:04,787][INFO ][o.o.i.MergeSchedulerConfig] [nibbles0.vulpes.vvelox.net] Updating autoThrottle for index top_queries-2026.03.22-25375 from [false] to [true]
[2026-03-22T13:05:04,787][INFO ][o.o.i.MergeSchedulerConfig] [nibbles0.vulpes.vvelox.net] Updating maxThreadCount from [0] to [4] and maxMergeCount from [0] to [9] for index top_queries-2026.03.22-25375.
[2026-03-22T13:05:04,787][INFO ][o.o.i.MergeSchedulerConfig] [nibbles0.vulpes.vvelox.net] Initialized index top_queries-2026.03.22-25375 with maxMergeCount=9, maxThreadCount=4, autoThrottleEnabled=true
[2026-03-22T13:05:04,787][INFO ][o.o.p.PluginsService ] [nibbles0.vulpes.vvelox.net] PluginService:onIndexModule index:[top_queries-2026.03.22-25375/UjK68cDXSvO54NOeqGaGVQ]
[2026-03-22T13:05:04,789][INFO ][o.o.i.MergeSchedulerConfig] [nibbles0.vulpes.vvelox.net] Updating autoThrottle for index top_queries-2026.03.22-25375 from [true] to [true]
[2026-03-22T13:05:04,792][INFO ][o.o.c.m.MetadataCreateIndexService] [nibbles0.vulpes.vvelox.net] [top_queries-2026.03.22-25375] creating index, cause [api], templates [], shards [1]/[1]
[2026-03-22T13:05:04,793][INFO ][o.o.c.r.a.AllocationService] [nibbles0.vulpes.vvelox.net] updating number_of_replicas to [0] for indices [top_queries-2026.03.22-25375]
[2026-03-22T13:05:04,807][INFO ][o.o.i.MergeSchedulerConfig] [nibbles0.vulpes.vvelox.net] Updating autoThrottle for index top_queries-2026.03.22-25375 from [false] to [true]
[2026-03-22T13:05:04,807][INFO ][o.o.i.MergeSchedulerConfig] [nibbles0.vulpes.vvelox.net] Updating maxThreadCount from [0] to [4] and maxMergeCount from [0] to [9] for index top_queries-2026.03.22-25375.
[2026-03-22T13:05:04,807][INFO ][o.o.i.MergeSchedulerConfig] [nibbles0.vulpes.vvelox.net] Initialized index top_queries-2026.03.22-25375 with maxMergeCount=9, maxThreadCount=4, autoThrottleEnabled=true
[2026-03-22T13:05:04,807][INFO ][o.o.p.PluginsService ] [nibbles0.vulpes.vvelox.net] PluginService:onIndexModule index:[top_queries-2026.03.22-25375/UjK68cDXSvO54NOeqGaGVQ]
[2026-03-22T13:05:04,808][INFO ][o.o.i.MergeSchedulerConfig] [nibbles0.vulpes.vvelox.net] Updating autoThrottle for index top_queries-2026.03.22-25375 from [true] to [true]
[2026-03-22T13:05:04,827][INFO ][o.o.c.r.a.AllocationService] [nibbles0.vulpes.vvelox.net] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[top_queries-2026.03.22-25375][0]]]).
[2026-03-22T13:09:11,396][INFO ][o.o.j.s.JobSweeper ] [nibbles0.vulpes.vvelox.net] Running full sweep
[2026-03-22T13:09:11,549][INFO ][o.o.i.i.PluginVersionSweepCoordinator] [nibbles0.vulpes.vvelox.net] Canceling sweep ism plugin version job
[root@🌐nibbles0]head| main[$]🧙🟢0>grep 9200 /var/log/opensearch/opensearch.log
[2026-03-22T13:04:11,558][INFO ][o.o.h.AbstractHttpServerTransport] [nibbles0.vulpes.vvelox.net] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}
[root@🌐nibbles0]head| main[$]🧙🟢0>
And it is definitely listening on that port, but it just times out...
Code:
[root@🌐nibbles0]/usr/local/etc|🧙🟢0>ncnetstat -l -U opensearch
Proto User PID Local Host L Port Remote Host R Port State
tcp6 opensearch 50759 127.0.0.1 9300 * * LISTEN
tcp6 opensearch 50759 127.0.0.1 9200 * * LISTEN
[root@🌐nibbles0]/usr/local/etc|🧙🟢0>curl -u admin:admin http://127.0.0.1:9200
curl: (28) Failed to connect to 127.0.0.1 port 9200 after 75006 ms: Could not connect to server
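One thing worth pinning down: a connect *timeout* against a socket that shows LISTEN usually means the SYNs are being silently dropped (e.g. by a packet filter), whereas "connection refused" would mean nothing is answering on that address/family at all. This little probe (hypothetical, not from my setup; host/port taken from the output above) distinguishes the two cases:

```python
import socket

def probe(host: str, port: int, timeout: float = 5.0) -> str:
    """Return 'open', 'refused', or 'filtered' for a TCP port."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "open"      # three-way handshake completed
    except ConnectionRefusedError:
        return "refused"   # an RST came back: nothing listening on this address/family
    except socket.timeout:
        return "filtered"  # SYNs silently dropped, e.g. by a firewall
    finally:
        s.close()

print(probe("127.0.0.1", 9200))
```

If it prints "filtered", that points at something eating the packets on lo0 rather than at OpenSearch itself; "refused" would instead suggest the listener isn't actually reachable over plain IPv4.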