Sunday, September 30, 2007

Edge-based network management and Event Tracing for Windows

Reading another great post from TaoSecurity, this time about Microsoft's Anemone project, which is an ambitious network and systems monitoring system using network end points.
Anemone is investigating network and systems management from the edges of the network, initially focusing on enterprise network management. It aims to build a network management platform based around two main components: (i) endsystem flow monitoring, providing the inputs to the system; and (ii) monitoring of the network routeing protocols, providing current system configuration. By aggregating and querying these data sources in a distributed fashion, Anemone will provide a platform on which network management applications can be built to provide tools for visualization, what-if analysis, and control of the network.

While the edge-based approach seems interesting, it is research and out of my league. I will leave it to the experts to give a review at a later time :-) A review I will read with much interest if and when it arrives!

Anyway, the original post mentioned use of event tracing for Windows:
To evaluate the per-endsystem CPU overhead we constructed a prototype flow capture system using the ETW event system [Event Tracing for Windows]. ETW is a low overhead event posting infrastructure built into the Windows OS, and so a straightforward usage where an event is posted per-packet introduces overhead proportional to the number of packets per second processed by an endsystem.
It sounded interesting, so I went on to the Microsoft website for an explanation:
Event tracing is a technique for obtaining diagnostic information about running code without the overhead of a checked build or use of a debugger. An event represents any discrete activity that is of interest, especially with respect to performance.

Developers can implement event tracing in a driver by using the Microsoft Windows software trace preprocessor (WPP). WPP software tracing in kernel-mode drivers supplements and enhances Windows Management Instrumentation (WMI) event tracing by adding conventions and mechanisms that simplify tracing the operation of a driver. WPP event tracing is implemented by adding certain C preprocessor directives and WPP macro calls to the driver source code. During an event tracing session, WPP logs real-time binary messages that can subsequently be converted to a human-readable trace of driver operations.

This is interesting, but it is aimed at the developer with source code access.

I don't see how event tracing can help an administrator trace and debug events on servers or clients. Perhaps I am mistaken?
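
That said, the built-in logman and tracerpt tools do seem to let an administrator at least record an ETW session and convert it to something readable, without any source code. A rough, untested sketch (the trace name is mine and the provider name is a placeholder; pick a real one from the query output):

logman query providers
logman create trace mytrace -p "Some-Provider-From-The-List" -o mytrace.etl
logman start mytrace
logman stop mytrace
tracerpt mytrace.etl -o mytrace.csv

Whether that actually helps with day-to-day debugging is another question.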

Friday, September 28, 2007

IRC bots and announcements

I am still in some IRC channels where it would make sense to have announcements of work-related things, e.g.:
  • Nagios monitoring alerts.
  • Subversion commit messages. And other post/pre-commit hooks.
In an ONLamp article about Subversion and Trac on FreeBSD, there is also an example of an IRC announcer using RSS feeds, implemented with Supybot (http://supybot.com/). This seems interesting; maybe it is useful for my other needs too, I will have to check it out.

On top of the Nagios alert messages, it would be nice to be able to send a query from an IRC bot to the Nagios service to get the current state of a service monitor.
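
To sketch what I mean, here is a minimal one-shot announcer in Python that a Nagios notification command could call; the server, nick and channel are of course placeholders:

#!/usr/bin/env python
# One-shot IRC announcer, e.g. for use as a Nagios notification command.
# Usage: irc-announce.py "PROBLEM: web01/HTTP is CRITICAL"
import socket
import sys

SERVER = "irc.example.com"   # placeholder IRC server
PORT = 6667
NICK = "announcebot"         # placeholder nick
CHANNEL = "#ops"             # placeholder channel

def announce(message):
    s = socket.create_connection((SERVER, PORT))
    s.sendall(("NICK %s\r\nUSER %s 0 * :announcer\r\n" % (NICK, NICK)).encode())
    f = s.makefile("r", encoding="utf-8", errors="replace")
    for line in f:  # wait for the welcome reply (001) before joining
        if line.startswith("PING"):
            s.sendall(("PONG " + line.split(None, 1)[1].strip() + "\r\n").encode())
        elif " 001 " in line:
            break
    s.sendall(("JOIN %s\r\nPRIVMSG %s :%s\r\nQUIT :bye\r\n"
               % (CHANNEL, CHANNEL, message)).encode())
    s.close()

if __name__ == "__main__":
    announce(" ".join(sys.argv[1:]) or "test message")

A proper bot (like Supybot above) would stay connected and could also answer queries, but this is the basic idea.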

Wednesday, September 26, 2007

Windows 2008 RC0 and IIS7 tips

A few days ago Windows 2008 RC0 was announced! I will not have time to test it any time soon, but it is a reminder that Windows 2008 will arrive soon, expected as early as February 2008!

It will be a nice signal to send to your customers that "We are now testing Windows 2008 and IIS7". This will build momentum for the later announcement of "Your services are now running on Windows 2008". You will look professional, technically capable and strong on process, having spent time preparing and testing services early and hopefully thoroughly on Windows 2008! Add to that, most everyone will agree there was a big improvement going from Windows 2000 to 2003, so your services will benefit from an early, well-tested adoption of Windows 2008!

Reading about Windows 2008, I stumbled upon the IIS community website, where there are very interesting articles, for example on how to get FrontPage 2002 running on IIS7, and a pointer to an IIS debugging tool for locating problems with crashing IIS applications.

Tuesday, September 25, 2007

Firefox as a security tool

I saw this amazing collection of security plugins for Firefox, called FireCAT. I haven't had time to install the collection, but I hope I get a chance soon :-)

Monday, September 24, 2007

Compact server and a laptop for client computer

Recently I have been preparing my IBM R60 laptop for network and server administration work, while at the same time keeping it at a functional client level. I settled on PC-BSD 1.4 a while back, and I have not regretted that. I can reuse my server automation and administration setup and scripts, and I can use it as a real laptop client computer. Of course, using it as a client does violate my own sense of security, as it has so much installed that I don't use. But it is a good base for my hobby automation and administration projects.

I do feel that combining server and client usage on the same installation is a bit contradictory and not a good long-term solution. So I was really pleased to see my favorite blog and book author publish an article describing a compact server-type computer for his network security monitoring. That setup looks very nice, is AMD based, with lots of disk and expansion options, and he got FreeBSD installed without a problem.

So, when I get the chance to split client usage from server usage, I know what I will get :-) But with the amount of time I have for home server and security projects at the moment, I will stick with my laptop for both server and client computing for a while :-)

Oh, and as a bonus Richard reminds his readers of gconcat, in case that article was missed. Just awesome blogging, I love it :-)

Wednesday, September 19, 2007

Desktop heap and GDI objects, usage and monitoring


When working with Lotus Notes and many Internet Explorer windows, you might run into problems where random applications will not open a new window, or Windows will even throw an error:


Initialization of the dynamic link library \system32\kernel32.dll (or user32.dll) failed. The process is terminating abnormally.
On my normal laptop there is no problem yet; here are my current physical and free memory numbers (values are in KB):

wmic MEMLOGICAL get TotalPhysicalMemory
TotalPhysicalMemory
2087256

wmic OS get FreePhysicalMemory
FreePhysicalMemory
1437220

The taskmgr shows this:

[Task Manager screenshot]
So really, I am not in any trouble yet! But if you do have the problem, someone wrote about a fix for running out of GDI objects. He describes how it is actually a problem with the desktop heap settings, and links to the Windows Desktop Heap Tweak Guide and Microsoft's own description of the problem.

To sum up, the solution is to adjust this registry value:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager\SubSystems\Windows
My default setting is:
%SystemRoot%\system32\csrss.exe ObjectDirectory=\Windows
SharedSection=1024,3072,512 Windows=On SubSystemType=Windows ServerDll=basesrv,1
ServerDll=winsrv:UserServerDllInitialization,3
ServerDll=winsrv:ConServerDllInitialization,2 ProfileControl=Off
MaxRequestThreads=16
Possible change:

Windows SharedSection=1024,3072,512

to:

Windows SharedSection=1024,8192,2048

(The second SharedSection value is the size in KB of the desktop heap for each interactive desktop; the third is for non-interactive desktops.)

I am not currently aware of the number of GDI objects where I will run into problems, but I hope to get an example from my colleague.

For the future I am interested in knowing how many GDI objects are created, and where, in these two cases, using a specific application from Citrix:
  • GDI object count on the client, when the application runs on the client

  • GDI object count on the Citrix server and the client, when the application is started from Citrix WI

I would like some utility to monitor and alert on GDI object usage (see the sketch after this list). Some ideas:
  • Task Manager: add the column, this is easy.

  • Maybe Process Monitor, now maintained by Microsoft, can be used? MISSING.

  • WMI should be possible, but I have not found the path or alias to use in wmic: MISSING

  • Add a counter in perfmon: MISSING INFO
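
Until I find the proper counter, a quick-and-dirty option is the Win32 GetGuiResources API via Python's ctypes; a sketch (the warning threshold is just an example near the default 10000 per-process quota):

import ctypes
import sys

# Win32 GetGuiResources flags
GR_GDIOBJECTS = 0
GR_USEROBJECTS = 1
PROCESS_QUERY_INFORMATION = 0x0400

user32 = ctypes.windll.user32
kernel32 = ctypes.windll.kernel32

def gui_object_count(pid, flag=GR_GDIOBJECTS):
    # Return the GDI (or USER) object count for a process.
    handle = kernel32.OpenProcess(PROCESS_QUERY_INFORMATION, False, pid)
    if not handle:
        raise ctypes.WinError()
    try:
        return user32.GetGuiResources(handle, flag)
    finally:
        kernel32.CloseHandle(handle)

if __name__ == "__main__":
    pid = int(sys.argv[1])
    count = gui_object_count(pid)
    print("PID %d: %d GDI objects" % (pid, count))
    if count > 9000:  # example threshold near the default 10000 quota
        print("WARNING: approaching the GDI object quota!")

Wrapped in a loop over all process IDs, this could be the basis for the total-usage alerting I am after.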
My other notes during all this reading:

Overview of Performance Monitoring seems to show that GDI objects are not monitored in System Monitor. I don't know if this is correct.

This article could be interesting to learn from: How to Use Remote Tools to Track Memory Leaks in Windows CE Applications.

EDIT 1:
I am still looking for tools for GDI object monitoring:
http://www.google.dk/search?hl=da&client=firefox-a&rls=org.mozilla%3Ada%3Aofficial&hs=m2Z&q=wmi+class+for+gdi+objects&btnG=S%C3%B8g&meta=

I am not the only one missing a GDI count for a process:
http://www.ureader.com/message/33360788.aspx

There is a monitoring tool, Usage Monitor 1.8.0.3. I tried it, but you can only put a watch limit on one process, not a total limit. Watches can be placed on: Memory Usage, GDI Objects, and USER Objects.

MD5: 679F88EA6D30D0035E26EC5B88E64063 umon-1.8.0.3.zip

Another monitoring tool:
http://www.mmdfactory.com/logger.html

EDIT 2:
Another fix was suggested by a colleague:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows]
"GDIProcessHandleQuota"=dword:00007530

(0x7530 is 30000 decimal; the default quota is 10000.)

To check the result from a batch file which runs regedit /s on a file with the above content:

regedit /s file-with-above-content.reg
Set ExeError=%ErrorLevel%
If .%ExeError%==.0 Set RCValue=%ExeError%

Another suggested way to check GDI object usage: Process Explorer.




Sunday, September 16, 2007

Avoid make install services, and ideas for best practice IT administration

In the past I have seen a more or less "make install" service installation of OpenLDAP as a Samba backend. Unfortunately, far too little effort was invested in getting a feel for what the OpenLDAP service was actually doing. The installation was missing basic functional testing, monitoring, redundancy and continued upgrading. If you do manage to find the time and collect proper knowledge, make sure you store your findings somewhere useful for yourself and your colleagues. You might get inspired by my thoughts on knowledge management and a single point of entry for search.

I fear there are many IT departments that still perform IT operations as single individuals, not sharing knowledge and sticking with "make install" service installations. What puzzles me about this picture is how anyone working professionally with IT administration can be satisfied with just "make install" installations, let alone how their boss can let it happen in their IT department.

From my years of IT administration I have come to think of an IT service as something which needs much more than "make install"! Off the top of my head, I can think of at least these issues if someone asks me for ideas for best practice IT administration:
  • service usage understanding (at least basic)
  • redundancy and availability (high)
  • security issues and impacts
  • installation and dependencies
  • monitoring, logging, baseline for behaviour and files used
  • performance, baseline and tuning
  • backup/restore
  • perform the most likely cases of actions, e.g. add/remove/change/stop/start
  • upgrades, minor and major, possibly backup->install/upgrade->restore
  • locate community wikis, forums and announce mailing lists
  • let someone else set up a complete test environment, following the initial docs
  • make some (initial) support scripts and docs, which everyone can commit to in the future... knowledge sharing!
All of this is part of an IT service, and most likely it won't be the same single person performing all aspects forever, so knowledge sharing is paramount. It may sound like going for the impossible, but I have seen it work out just fine, and to the pleasure of everyone! It creates a great feeling when everyone can contribute... it just reinforces the good spirit and good work of the department! So keep striving; if you, your boss and your colleagues really want it, you will succeed!

Well anyway, what got me thinking about all this today was an OpenLDAP post over at ONLamp, which mentions the latest OpenLDAP upgrades, version 3, and a rundown of how to make an OpenLDAP installation redundant. The last part was particularly interesting, as it mentions syncrepl as superior to slurpd since OpenLDAP version 2.2:

In the late 1990s, a new feature called Content Synchronization (see RFC 4533) offered a new basis for replication. In OpenLDAP 2.2, the project introduced synchronization replication (syncrepl) based on persistent search. syncrepl uses change sequence numbering and is a pull approach by the replica server. It is a much more robust replication approach and more forgiving when replica servers lose connectivity.
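
For my own notes, a syncrepl consumer stanza in the replica's slapd.conf looks roughly like this (hostname, suffix, DN and password are made up):

syncrepl rid=001
         provider=ldap://ldap-master.example.com
         type=refreshOnly
         interval=00:00:15:00
         searchbase="dc=example,dc=com"
         bindmethod=simple
         binddn="cn=replicator,dc=example,dc=com"
         credentials=secret

With refreshOnly the replica pulls changes on the interval (here every 15 minutes, dd:hh:mm:ss format); refreshAndPersist keeps the connection open instead.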

I have seen problems with Citrix Access Gateway (CAG) logon failures due to a missing OpenLDAP upgrade, and I have also seen non-working OpenLDAP slurpd replication.

All together, it confirms my initial point: avoid "make install" service installations, and spend more time with your IT service; it will most likely pay off in terms of better operation, service and support!

Thursday, September 13, 2007

Storm growth and botnets in general

I am not much involved with highly exposed servers and services (webhosting) anymore (recent job changes), so the time I spend e.g. fighting spam (where botnets of course play a huge role) has decreased.

After the recent job changes my interest in security has not decreased, but my focus seems to have moved, more toward intrusion/extrusion detection and penetration testing. I hope I can keep exploring that path, with some interesting posts here.

But anyway, after reading about the continuing Storm botnet growth, I wanted to move some of my old notes and URLs to this blog. Here is a quick list of good URLs to get you started on botnets in general:
And some posts about the Storm botnet structure, growth and operation:

Google analytics

Yesterday I mentioned a wish for statistics on this blog, and I came to think of Google Analytics (GA). I have never used GA before, but when I was involved in website hosting I came across Urchin, because I was testing the Urchin FreeBSD port. Later Urchin was acquired by Google and is now GA. Urchin was superior to any other website statistics software I played with back then, so I look forward to seeing GA in action!

Activating GA was very easy: I used my existing Google account and pasted the JavaScript code into my blog template, just above the body-end HTML tag. That was it! And you can add more site watches, and administer them all from one GA account!
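
From memory, the snippet GA hands you looks something like this (the account number is a placeholder), pasted just before the closing body tag:

<script src="http://www.google-analytics.com/urchin.js" type="text/javascript"></script>
<script type="text/javascript">
_uacct = "UA-xxxxxx-1";
urchinTracker();
</script>
</body>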

According to a few Google searches, and a user comment on a blog belonging to a guy involved in developing GA, it does not seem possible to add GA to flickr sites! Like that user I find this annoying, and it could drive me to switch to Picasa, which most likely will get GA functionality before flickr!

Search + single point of entry + availability = successful knowledge management!

In my very first post on this blog I mentioned the importance of search systems/capabilities when you want a successful knowledge management system. Forget about categories, sorting and agreeing on one format for all knowledge; I predict it will not work for you if you go down that road! Instead think multiple systems and formats for storage, and focus on a single point of entry for availability and search! Does this sound like something you know? Google! It is not without reason that Google "won" when compared to the old indexing search sites!

I have two agendas for this search tool/search system/search engine investigation: on the one hand I am looking for something useful for an enterprise, and on the other hand I want to check out the open source possibilities so I can have something to play with in various home/friend projects! The main differences are money and how many systems/data sources the search can crawl/index and interface to. No matter which agenda you have, you should be able to get inspired by this list of requirements:
  • Index ViewVC websites, which can be protected by shared login credentials.
  • Crawl text and pdf documents on websites.
  • Must scale well for many documents!
  • Must be gentle/tunable and handle errors gracefully.
And this nice-to-have feature list:
  • Administration of who (users, public, IP-based) has access to search information from different sources.
  • Index/search multimedia formats, pictures and video, similar to Blinkx and Google Images.
  • Handle searches with foreign charsets, e.g. Danish æøå.
  • Crawl docs on FTP sites, e.g. with anonymous login.
  • Crawl new and old Microsoft Office documents, such as Word, Excel and PowerPoint.
  • Crawl Windows shares.
  • Crawl WebDAV.
  • Interface to and crawl Microsoft SharePoint sites.
  • Interface to and crawl Lotus Notes databases, at least through web-enabled databases.
I started by looking at creating Google Custom Search Engines (Google CSE) and Google Custom Search Business Edition (CSBE). These are not free services for the requirements I have, so I have decided not to spend more time on them. A snip from the Google CSBE website:
Custom Search Business Edition is great for public websites that have a lot of web-based content that needs to be easily searchable.

Google Desktop Search also does not fit what I am looking for, so moving on.

Google has some other products which look very interesting: the Google Search Appliance (GSA) and Google OneBox. OneBox can supposedly interface to many systems (CRM, ERP, etc.) and you can get your own module developed. Take a look at the different GSA products, or use the feature matrix for the different versions of GSA. GSA or OneBox is definitely very interesting, especially for the large enterprise, who might want to save resources and spend some money to get what is probably the best search tool in the world! But I don't have any of those Google tools available to me right now and probably never will, at least not for private or community usage!

So I kept searching ;-) and I quickly became fond of the incredible details and amount of information available at Search Tools for Web Sites and Intranets (http://www.searchtools.com/).

I found several open source search tools which seem to fit a fair amount of my requirements and nice-to-have features above, so I would like to give the following a try: OpenWebSpider, ASPSeek, mnoGoSearch, DataParkSearch and Swish-E.

It was not crystal clear to me which of them could in fact index Microsoft Office files, but at least Swish-E and ht://Dig seemed capable.

I ought to say that I tend to stay away from Java and PostgreSQL based systems, as I have little or no experience running those!

Some of the open source search tools are available in the FreeBSD ports collection (of which I am a huge fan), so those will be the ones I test: DataParkSearch, Swish-E and mnoGoSearch!
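
Mostly to remind myself what these tools do under the hood, the core of crawl-and-index boils down to something like this toy Python sketch (one hard-coded seed URL, text/HTML only, index kept in memory; real tools add link-following, politeness and on-disk indexes):

import re
import urllib.request
from collections import defaultdict

def fetch(url):
    # Download a page and return its text (assumes a text/HTML response).
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="replace")

def build_index(urls):
    # Map each word to the set of URLs containing it: a tiny inverted index.
    index = defaultdict(set)
    for url in urls:
        text = re.sub(r"<[^>]+>", " ", fetch(url))  # crude tag stripping
        for word in re.findall(r"\w+", text.lower()):
            index[word].add(url)
    return index

if __name__ == "__main__":
    index = build_index(["http://www.searchtools.com/"])
    print(sorted(index["search"]))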

Other urls I visited during this initial search tool investigation:
http://www.searchenginewatch.com

What is a search engine? http://www.techweb.com/encyclopedia/defineterm.jhtml?term=search+engine

Read about Wikia, see:
http://www.informationweek.com/blog/main/archives/2007/08/will_google_be_1.html

Wednesday, September 12, 2007

Comparing files and folders

When it comes to comparing files, I have been used to performing this on text files only, always wondering if similar functionality is available for Word documents or even images. The tools I have used in the past:

  • diff -y --suppress-common-lines file1 file2: very useful for checking changes to config files, e.g. in scripting if you want to make sure only a certain number of lines are changed (see the Python sketch after this list). Adding something like MKS Toolkit or GnuWin32 will give you similar tools on your Windows platform.
  • Total Commander has a built-in file compare.
  • My favorite is WinMerge, which is freeware and can recursively compare folders!
  • WinMerge can also magically replace the built-in side-by-side compare functionality of TortoiseCVS and TortoiseSVN, which are my favorite Subversion and CVS version control Windows interfaces.
  • FreeCommander can compare files if you set it to use WinMerge. Without WinMerge, FreeCommander can recursively compare files and folders (Synchronize).
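
As hinted above, for scripted comparisons Python's difflib can do the counting without external tools; a sketch (the file names are just examples):

import difflib
import sys

def changed_line_count(path_a, path_b):
    # Count added/removed lines, roughly what
    # "diff -y --suppress-common-lines | wc -l" would show.
    with open(path_a) as fa, open(path_b) as fb:
        a, b = fa.readlines(), fb.readlines()
    changes = [line for line in difflib.unified_diff(a, b)
               if line.startswith(("+", "-"))
               and not line.startswith(("+++", "---"))]
    return len(changes)

if __name__ == "__main__":
    print(changed_line_count(sys.argv[1], sys.argv[2]))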

I don't recall needing to merge a lot of files, so I don't know how the tools above compare to Araxis Merge.

So at my new job I was pleasantly surprised to learn of Araxis Merge, a tool new to me, which can also compare pictures! I do not have any particular use for it right now, but I am sure I will think of something :-) Unfortunately it would cost me some bucks, so I will probably stick with WinMerge for the time being.

Blog backup and statistics?

After writing my first blog post I accidentally deleted parts of it! What happened was this:

I edited the blog post, saved and published it; so far so good.

Just after saving it, I got an idea for a minor change, so I went back to the editor window using the browser. This was a mistake! I did not notice that the text was the old text from before my first change. So when I published what I thought was a minor change, the first major change was in fact gone!

So after retyping the first major change to my post, it came to my mind: how do I perform a backup of my blog here at blogger.com?
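
One crude idea, assuming Blogger's Atom feed exposes all posts (the blog name and the max-results parameter are my guesses): just save the posts feed to a file.

import urllib.request

# Crude Blogger backup: save the full posts feed as one Atom/XML file.
# Feed URL and max-results are assumptions about the Blogger feed API.
FEED = "http://myblog.blogspot.com/feeds/posts/default?max-results=500"

with urllib.request.urlopen(FEED) as resp:
    data = resp.read()
with open("blog-backup.atom", "wb") as out:
    out.write(data)
print("Saved %d bytes" % len(data))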

Also, I would like to know if anyone actually visits; maybe there are some statistics similar to those of AWStats?

Starting a blog, handling knowledge management

Welcome to my blog, thanks for visiting!

I have started this blog to improve my knowledge management system! In short, this blog will contain all information I feel like saving! For more details on the entire system, see below.

The need for an improvement to my knowledge management came up this month, when I got a new job! At my new job I can no longer commit to or check out my personal Subversion or CVS repositories. And I don't have access to Firefox, so I am also missing my bookmark sync-and-sort plugin!

Things you won't find here are really personal information or notes that are confidential; those will have to stay on my PC or in a special Subversion repository.

So to summarise, my knowledge management system as of today consists of the following:
  • Ideas/readme/snippet commits used to be saved in Subversion, but from now on these will probably go into this blog!
  • Personal scripts will still go into personal Subversion repositories, as it is easier to deploy to servers. Snippets from those will go to the blog when appropriate.
  • Howtos/working notes will probably stay in the appropriate CVS/Subversion repositories for a while. This is not optimal for sharing with more than a few people, so snippets will be in this blog!
  • Pictures go to the appropriate flickr accounts: personal or family, available to anyone, family or friends.
  • Videos unfortunately can not be put into flickr. A place like flickr, with video power like YouTube, would be nice! Any ideas?
  • E-mail will probably move more and more into Gmail, as that will hopefully be available anywhere I ever need it.
  • The few websites I help to webmaster are saved in a Subversion repository.
  • Instant messaging logs are not central or searchable; this would be nice to see.
  • I don't contribute to any particular wiki anywhere, neither do I have one of my own.
  • I don't contribute to a particular forum, neither do I have one of my own.
  • I have not yet started using VoIP or mobile technology beyond low-tech personal use.
  • Daily top URLs to visit (bookmark management) will stay in sync-and-sort for now, but should not grow into a mess like recently. Instead I will post on this blog, including my thoughts on a particular URL. I have a few ideas for better bookmark management so I don't have to use sync-and-sort.
  • Book reviews and notes will move from Subversion to this blog.

Search capability within all these systems is of great importance, and when it comes to sharing knowledge I tend to say good search is the most important requirement of a knowledge system! Otherwise you risk that the system never gets used.

For my own setup above, a generic search across all systems is not available to me; I have to search each of the knowledge system parts in whatever way I can. This is one of the reasons I prefer any format that is text based, because then at least I can grep for one word (see the sketch below). I would really like to have a single point of entry search engine which can crawl any of the above! Limiting who can see and perform searches within certain data would be paramount! I am not aware of a product that can do this. For the enterprise at work we will take a look at the Google Search Appliance, but for my personal usage I hope to find something similar in an open source project.
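
The grep part at least is easy to script across a folder of text notes; a sketch (the folder layout is assumed):

import os
import sys

def grep_notes(root, needle):
    # Recursively search text files under root, printing matching lines.
    needle = needle.lower()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="replace") as f:
                    for lineno, line in enumerate(f, 1):
                        if needle in line.lower():
                            print("%s:%d: %s" % (path, lineno, line.rstrip()))
            except OSError:
                pass  # skip unreadable files

if __name__ == "__main__":
    grep_notes(sys.argv[1], sys.argv[2])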

The IBM Quickr approach is appealing to me, at least from a corporate knowledge sharing point of view. It seems perfect for Notes environments, but unfortunately I have not had a chance to try it out yet! I wish there was an open source alternative with similar functionality I could play with. A Google search got me to Sun Portal Server, but 1) it might not be what I want, and 2) it has some pretty hard technical requirements for me to get started, so I will probably never know about the first issue.

I don't know how other technical people cope with the difficulties of handling job and personal knowledge management systems. Undoubtedly it must raise problems with people losing their notes if they change job or job position, and it goes without saying that you can not mirror work knowledge management systems for your personal usage! As work and personal life keep merging, this issue will keep popping up.