Splunk Notes

list available indexes:

 | eventcount summarize=false index=* | dedup index | fields index

count events by action and host:

 host="10.35.12.1" | stats count by action, host

Fortigate by country (the source-country field name varies, so both spellings are shown):

 host="10.35.12.1" | stats count by src_country
 host="10.35.12.1" | stats count by srccountry

log lines by time:

 host="10.35.12.161" | chart count by _time

grep -v equivalent (exclude events containing a term):

 host="10.35.12.161" NOT "slapd"

log data per day:

 index=_internal source=*metrics.log splunk_server="*" group="per_index_thruput" | eval MB=kb/1024 | chart span=1d sum(MB) over _time
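
A variation (not in the original notes) that splits the daily total by index; this assumes the index name is carried in the series field of per_index_thruput events, which is worth verifying on your version:

 index=_internal source=*metrics.log splunk_server="*" group="per_index_thruput" | eval MB=kb/1024 | chart span=1d sum(MB) over _time by series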

hosts that have not logged in recent time:

 | metadata index=* type=hosts | eval age = now()-lastTime | where age > (1) | sort age d | convert ctime(lastTime) | fields age,host,lastTime

In <code>where age > (1)</code> the threshold is in seconds, so this matches any host whose most recent event is more than one second old. To express days in seconds, multiply by 86400 (the number of seconds in a day); for example, four days is <code>4*86400</code>.
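
For example, the same search reporting only hosts that have been silent for more than four days:

 | metadata index=* type=hosts | eval age = now()-lastTime | where age > (4*86400) | sort age d | convert ctime(lastTime) | fields age,host,lastTime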

What are users searching for?

 index=_audit action=search search=* | table _time, search, user
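
A variation (not in the original notes) that counts searches per user instead of listing them:

 index=_audit action=search search=* | stats count by user | sort - count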

== using lookups ==

host="myserver" | lookup lookup_priority_FacilitySeverity priority as priority OUTPUT FacilitySeverity as FacilitySeverity | search FacilitySeverity!="Local4.Debug" | chart count by _time , FacilitySeverity  span=1h

where a CSV file was created:

 priority,FacilitySeverity
 0,Kernel.Emergency
 1,Kernel.Alert
 2,Kernel.Critical
 3,Kernel.Error
 4,Kernel.Warning
 5,Kernel.Notice
 6,Kernel.Info
 7,Kernel.Debug
 8,User.Emergency
 9,User.Alert
 10,User.Critical

and uploaded to Splunk, complete with a lookup definition.
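
For reference, a minimal sketch of what that lookup definition could look like in transforms.conf; the stanza name matches the lookup invoked above, while the CSV filename is a hypothetical choice:

 [lookup_priority_FacilitySeverity]
 filename = priority_FacilitySeverity.csv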


== sending alerts to scripts ==

http://wiki.splunk.com/Community:Use_Splunk_alerts_with_scripts_to_create_a_ticket_in_your_ticketing_system

To do this, set up your saved search, put it on a schedule, and set the action to trigger a shell script you've written whenever the number of events you're interested in is > 0.

Put your script (not the Remedy script) in /opt/splunk/bin/scripts.

This script should call the Java program that Remedy uses to generate tickets and pass it data from the Splunk alert. Splunk alerts support the following variables:

* $1 = number of events returned
* $2 = search terms
* $3 = fully qualified search string
* $4 = name of the saved search
* $5 = the reason the action/script was triggered (for example, the number of events returned was > 1)
* $6 = a link to the saved search in Splunk
* $7 = a list of the tags belonging to this saved search (this option was removed starting in Splunk 3.6)
* $8 = path to a file where raw results of this search are located (as opposed to passing the actual results into the ticket -- this could be a lot of data)

The following example script passes the reason the script was triggered, a link to the saved search, and the path to the search results file into the ticket that the <code>generateRemedyTicket</code> Remedy script creates when it's run.
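
A minimal sketch of such a wrapper, where the generateRemedyTicket path and flags are assumptions rather than a documented interface:

 #!/bin/sh
 # Splunk passes alert details as positional arguments:
 #   $5 = reason the script was triggered
 #   $6 = link to the saved search
 #   $8 = path to the file holding the raw search results
 REASON="$5"
 LINK="$6"
 RESULTS_FILE="$8"
 # Hypothetical site-specific Remedy wrapper and flags:
 /opt/remedy/bin/generateRemedyTicket \
     --summary "Splunk alert: $REASON" \
     --url "$LINK" \
     --attachment "$RESULTS_FILE"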

== System maintenance ==

=== open file handles ===

Support case: 234450. Per your questions about what Splunk asks for in terms of <code>ulimit -n</code>, we usually recommend at minimum 10240.

http://blogs.splunk.com/2011/11/21/whats-your-ulimit/

http://answers.splunk.com/answers/13313/how-to-tune-ulimit-on-my-server.html
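
To check what limit is actually in effect, a quick sketch (the second command assumes Linux and a running splunkd):

 # limit for the current shell
 ulimit -n
 # limit applied to the oldest running splunkd process
 grep 'open files' /proc/$(pgrep -o splunkd)/limits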

As to why file descriptors get eaten over time: Splunk uses file descriptors to monitor files. If you are monitoring a large number of files, but many of them stop being written to over time, consider two things to reduce file descriptor usage:

* add a whitelist or blacklist to only monitor relevant files
* use <code>ignoreOlderThan</code> in your inputs.conf to stop monitoring files older than a certain age; both of these features are detailed in the following doc, and a combined sketch follows the link

http://docs.splunk.com/Documentation/Splunk/6.1/Admin/Inputsconf
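
A minimal inputs.conf sketch combining both settings; the monitored path, whitelist regex, and age threshold are illustrative values, not recommendations:

 [monitor:///var/log]
 # only monitor files ending in .log
 whitelist = \.log$
 # stop monitoring files not modified in the last 7 days
 ignoreOlderThan = 7d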

If you want to see how many files Splunk is monitoring at once, you can use:

 $SPLUNK_HOME/bin/splunk list monitor

[[Category:Computers]]