So you’ve gotten an instance of intrigue-core up and running using the AMI or Docker guide, but what now? Give scans a try. Here’s how.

Create a new project. Let’s run this one on Mastercard (they run a public bounty program on Bugcrowd):

create_project

Now, run a “Create Entity” task to create a DnsRecord with the name “mastercard.com”.

This time, however, let’s set our recursive depth to 3. This tells the system to run all viable tasks whenever a new entity is created, recursing until we reach our maximum depth:

iteration.jpg
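Conceptually, the recursion works like the sketch below: every newly created entity gets all viable tasks run against it, with the depth counter decremented until it hits zero. This is an illustrative sketch with a stubbed-out discovery step, not intrigue-core’s actual scheduler (which fans the work out through Sidekiq):

```ruby
# Stand-in "task": given an entity name, return newly discovered entities.
# These names are made up for illustration.
DISCOVER = lambda do |entity|
  case entity
  when "mastercard.com"     then ["www.mastercard.com", "mail.mastercard.com"]
  when "www.mastercard.com" then ["203.0.113.10"]
  else []
  end
end

# Run all viable tasks on an entity, recursing on each new entity
# until the maximum depth is reached.
def recurse(entity, depth, discover, found = [])
  return found if depth.zero?
  discover.call(entity).each do |new_entity|
    next if found.include?(new_entity)   # don't re-process known entities
    found << new_entity
    recurse(new_entity, depth - 1, discover, found)
  end
  found
end

recurse("mastercard.com", 3, DISCOVER)
# => ["www.mastercard.com", "203.0.113.10", "mail.mastercard.com"]
```

With depth 1, only the entities one hop away would be found; depth 3 lets the sketch follow `www.mastercard.com` down to its IP address.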

Hit “Run Task” and you’ll see that our entity was successfully created:

create_entity.jpg

Now, let’s browse to the “Results” tab and get an overview of the “Autoscheduled Tasks” that have been kicked off automatically:

results-autoscheduled

Wow, 83 tasks in just a few seconds! Core is FAST, thanks to Sidekiq and Sequel. Now we can browse over to the “Graph” tab, and get an overview of the entities (nodes) and the tasks (edges) that created them.

mastercard

Note that the graph is generated every time you load the page, so you may need to refresh a couple of times before it shows. You can zoom in and out to get details on the nodes:

zoom-graph.jpg
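Under the hood, the graph data is just entities and the task runs that connect them: each task run contributes an edge from its source entity to every entity it created. A hypothetical sketch of assembling such an edge list (the task names and record layout here are made up for illustration, not core’s actual schema):

```ruby
require "set"

# Illustrative task-run records: what ran, on what, and what it created.
task_runs = [
  { task: "dns_subdomain_bruteforce", source: "mastercard.com",
    created: ["www.mastercard.com", "mail.mastercard.com"] },
  { task: "dns_lookup", source: "www.mastercard.com",
    created: ["203.0.113.10"] }
]

nodes = Set.new   # entities
edges = []        # [source, target, task] triples

task_runs.each do |run|
  nodes << run[:source]
  run[:created].each do |entity|
    nodes << entity
    edges << [run[:source], entity, run[:task]]  # edge labeled by task
  end
end
```

Rendering that edge list is all the “Graph” tab needs to do; the expensive part is the task runs that produce it.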

Browsing over to the “Dossier” tab, you can see that some fingerprinting is happening on the webservers, based on the page contents. Note that nothing invasive is happening here; it’s simply grabbing pages and analyzing the results:

dossier-2

One neat feature is that core actually parses web content, including PDFs and other file formats, to pull out metadata. More to come on this!
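To give a flavor of what “pulling out metadata” means, here’s a deliberately simplified sketch that scrapes a few fields out of fetched HTML with regexes. Core’s real parsers go much further (PDF document properties, office formats, and so on); this is illustrative only:

```ruby
# Illustrative sketch: extract a few metadata fields from page content.
def extract_metadata(html)
  meta = {}
  if html =~ %r{<title>(.*?)</title>}mi
    meta[:title] = Regexp.last_match(1).strip
  end
  # <meta name="..." content="..."> tags often leak software and authors.
  html.scan(/<meta\s+name="([^"]+)"\s+content="([^"]+)"/i) do |name, content|
    meta[name.downcase.to_sym] = content
  end
  meta
end

page = <<~HTML
  <html><head>
    <title>Example Corp</title>
    <meta name="generator" content="WordPress 4.7">
    <meta name="author" content="jdoe">
  </head></html>
HTML

extract_metadata(page)
# => {title: "Example Corp", generator: "WordPress 4.7", author: "jdoe"}
```

Even this toy version surfaces a CMS version and a username, which is exactly the kind of passive signal that shows up in the Dossier.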

All this in just a few minutes: attack_surface

 

To get started with intrigue-core using Docker, you’ll need to install Docker on your machine.

Next, pull down the intrigue-core repository to your local machine with a git clone:

$ git clone https://github.com/intrigueio/intrigue-core
$ cd intrigue-core
$ docker build -t intrigue-core .
$ docker run -i -t -p 7777:7777 intrigue-core

This will start postgres, redis and the intrigue-core service, giving you output that looks like the following (shortened for brevity):

Starting PostgreSQL 9.6 database server                                                                                                                                                           [ OK ] 
Starting redis-server: redis-server.
Starting intrigue-core processes
[+] Setup initiated!
[+] Generating system password: hwphqlymmpfrqurv
[+] Copying puma config....
[ ] File already exists, skipping: /core/config/puma.rb

* Listening on tcp://0.0.0.0:7777
Use Ctrl-C to stop

As it starts up, you can see that it generates a unique password. You can now log in at http://localhost:7777 on your host machine with the username intrigue and the password above!
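The login is standard HTTP Basic auth, so you can hit the service from scripts too. A minimal Ruby sketch (substitute whatever password your instance printed at startup; the one below is the example from the log output above):

```ruby
require "net/http"

uri = URI("http://localhost:7777/")
req = Net::HTTP::Get.new(uri)
req.basic_auth("intrigue", "hwphqlymmpfrqurv")  # password from the startup log

# The request now carries a standard Basic Authorization header:
req["authorization"]
# => "Basic aW50cmlndWU6aHdwaHFseW1tcGZycXVydg=="

# To actually send it (requires the container to be running):
# res = Net::HTTP.start(uri.hostname, uri.port) { |http| http.request(req) }
```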

Now, to kick the tires, create a project:

Intrigue Core 2017-03-07 00-14-55

Now that you have a new project, let’s run a single task:

service bruteforce

This will give us lots of interesting things to look into:

results

And we can click on any of these and “iterate” with a new task. Let’s iterate on that first DnsRecord, sip.microsoft.com. Click on it and you’ll see all of the entity’s details, as well as the task runner below, where we can select from all tasks that can run on a DnsRecord entity.

iterate
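Conceptually, each task declares which entity types it can run against, and the task runner only offers the matching ones. A hypothetical sketch of that filtering (the task names and type lists here are illustrative, not core’s actual task registry):

```ruby
# Hypothetical mapping of task name to the entity types it accepts.
TASKS = {
  "dns_lookup" => ["DnsRecord"],
  "nmap_scan"  => ["DnsRecord", "IpAddress", "NetBlock"],
  "uri_spider" => ["Uri"]
}

# Return the tasks that can run on a given entity type.
def tasks_for(entity_type)
  TASKS.select { |_name, types| types.include?(entity_type) }.keys
end

tasks_for("DnsRecord")  # => ["dns_lookup", "nmap_scan"]
```

Clicking a DnsRecord entity in the UI is effectively asking this question: which tasks accept a DnsRecord?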

In this case, we select “nmap scan” and hit “Run Task”, allowing us to iterate further!

iterate-further

Keep going, and see what you can discover.

Note that many tasks require an API key, which you can configure in the “Configure” tab. Each listing has a handy link to the configuration, making it easy to provision an API key:

configure

 

Don’t forget to check out the Dossier and Graph views, which show a listing of all entities and a graph of all entities, respectively:

graph

Oh, and by clicking on a node, you can get a bit more info in the HUD!

graph HUD

That’s it for now. Have fun! Please jump in the Gitter channel if you have troubles or want to learn more.

UPDATE: The latest test image can be found by searching for ‘intrigue-core-edge’ in Community AMIs. It is currently only available in the Northern Virginia region on EC2.

I’ve made an EC2 instance available for testing if you’d like a simple way to try it out. Here’s a simple demo of how to get started.

Great reading from our friends at the CIA:

Not only are open sources increasingly accessible, ubiquitous, and valuable, but they can shine in particular against the hardest of hard targets. OSINT is at times the “INT” of first resort, last resort, and every resort in between.

https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/csi-studies/studies/vol48no3/article05.html

Intelligence Gathering, Reconnaissance, Information Collection… No matter what you call it, it’s an important component of any security assessment project.

Intelligence Gathering: The collection of intelligence, both overt and covert, to aid in the decision of a course of action.

Intelligence Gathering (IG) is often viewed and approached as the first step of an assessment project. A penetration tester will diligently scan the target’s website, gather DNS information, check Google for email addresses, and perhaps even check Shodan for exploitable hosts.

Unfortunately, this is often where the Intelligence Gathering stops. The assessor now has enough information to move on to the “Active Scanning” or “Exploitation” phases, overlooking the fact that they will need to continuously perform IG on new information throughout the assessment.

… So what is Intelligence Gathering at its core? There are a number of recognized disciplines within the scope of Intelligence Gathering. The most recognizable of these is open-source intelligence (OSINT): Intelligence Gathering performed on publicly available sources. In the Intelligence Community (IC), the term “open” refers to overt, publicly available sources (as opposed to covert or clandestine sources).

We often focus on OSINT, but other disciplines such as SIGINT and HUMINT are frequently left untouched when assessing the security of an entity, since they may not be relevant, in scope, or within the control of the entity that commissioned the assessment.

The process can be difficult to scope: until you’ve gathered enough information to achieve your goal, you’ll continue to collect intelligence and analyze it, filtering it into a model of the target. How much IG is “enough” depends largely on the goals of the engagement; if you haven’t yet achieved your goal, you have more to do.

Performing Intelligence Gathering at scale can also be challenging. A small business or organization can consist of thousands of entities, which may or may not be relevant during an assessment. An enterprise, made up of thousands (if not millions) of entities and the relationships between them, is simply mind-boggling and impossible to process with traditional techniques. This is truly a “big data” problem.

Our mission is to make Intelligence Gathering and Analysis simple, and support the assessment efforts of security professionals.