
Wednesday, July 06, 2011

Selenium clicking links from Safari/Opera not working

There have been lots and lots of solutions proposed for this problem. Let me add mine to the list.

Here is what I was facing: the following (very simple) link click
$link->selenium->click("//td[5]/a/img");

works perfectly in Firefox, IE and Google Chrome but results in no action in Safari or Opera: no error is generated, but nothing on the page changes.

After pulling my hair out for two days, I noticed that the URL the link resolves to was displayed at the bottom of the Firefox window (in the status bar).


So I changed the command to
$link->selenium->open("main.php?action=adm_overview&lang=de");

And voila!...it was exactly the same as clicking on the link.

If Firefox does not display the link (for instance, if it displays the JavaScript that will be called instead), then try a tool like WebScarab to intercept the outgoing request.
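If the href is present in the DOM, you can also fetch it at runtime instead of hard-coding it. A minimal sketch, assuming the same wrapper object as above and that it exposes Selenium RC's getAttribute command (the "locator@attribute" syntax is standard Selenium RC):

$href = $link->selenium->getAttribute("//td[5]/a@href");
$link->selenium->open($href);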

Tuesday, March 22, 2011

Bromine error - Could not get handle to remote scheduler: Connection refused to host: 127.0.0.1


I pulled my hair out for a day trying to get this solved.
But first, a bit of zen: when I am stuck on a problem that seems intractable, I find it best to walk away from it, sleep on it, and then the next day debug it some more and re-read the solutions I had already gone through that did not get me to the answer the first time.

The error I kept getting was the following: the scheduler would not start, failing with the error in the title of this post. I tried running start.sh by hand to check what the error was, and that worked fine. WTF!!

The solution was something stupid that I had done.
I needed to set up a static IP address and first tried NetworkManager, which did not work. I ended up just editing the interfaces file to get the static IP address configured.

However, while I was messing around with NetworkManager, an IP address entry was written to my /etc/hosts file.

The following line was added to the /etc/hosts file:
10.41.16.38 test-P5Q-VM #Added by NetworkManager

I just had to delete the line from /etc/hosts and then the scheduler started working.
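For reference, a sketch of what the relevant lines of a clean /etc/hosts might look like afterwards (the 127.0.1.1 entry is the usual Debian/Ubuntu convention for the machine's own hostname; I'm assuming that layout here):

127.0.0.1 localhost
127.0.1.1 test-P5Q-VM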

Zen works. I say this not as a devotee, as I am not one. I would like to be, but I am not even close. My mind wanders so easily. It takes all my willpower just to get this post finished without moving on to something else in between.


Tuesday, February 22, 2011

Solving SSL error Unable to load config info from /usr/local/ssl/openssl.cnf on Windows

Try the following:
set OPENSSL_CONF=C:/Program Files (x86)/Apache Software Foundation/Apache2.2/conf/openssl.cnf
(or more generally, set it to where openssl.cnf is located)

and the command which generated the error above should now run cleanly.
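Note that set only lasts for the current command prompt session. To make the variable permanent, setx (built into Windows Vista and later; on XP it ships with the Support Tools) writes it to the registry; open a new prompt afterwards for it to take effect:

setx OPENSSL_CONF "C:/Program Files (x86)/Apache Software Foundation/Apache2.2/conf/openssl.cnf"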

Monday, March 31, 2008

XPCOM Proxy and getting access to the GUI from non-GUI thread

It was a frustrating week of pulling out hair trying to get this to work.

First, you'll need nsIProxyObjectManager, which is not part of the Gecko SDK; the IDL file is part of the Firefox source code. So download the Firefox source code for the Firefox version that you're using. If you are using Firefox 2.0, downloading the Firefox 3.0 source code and using the IDL included there will not suffice: the UUID of the interface in Firefox 3.0 is different from that of Firefox 2.0. It would've saved me half a week of frustration had I realized this sooner.

I'm sure there are several ways to proceed from here. I found the IDL in the Firefox source, generated a header file out of it using the xpidl command from the Gecko SDK, and included that in my project. You'll find that when you try to compile your project with nsIProxyObjectManager.h included, it'll require other header files as well, so you'll have to find and generate header files from a few other IDLs too. I think in total it comes out to about 10. Not too frustrating a process, but frustrating enough.
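For example, the header generation step looks something like this, run from the directory where you copied the IDL (the paths are placeholders for wherever your Gecko SDK lives):

/path/to/gecko-sdk/bin/xpidl -m header -I /path/to/gecko-sdk/idl nsIProxyObjectManager.idl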

Finally, you'll need to access the method you want via the proxy.
Declare the proxy object manager:
nsresult rv;
nsCOMPtr<nsIProxyObjectManager> pmgr;
pmgr = do_GetService("@mozilla.org/xpcomproxy;1", &rv);

Now, I'm trying to write to an RDF file via the proxy, so I'll declare an RDF datasource pointer to be filled in by the proxy manager:
nsCOMPtr<nsIRDFDataSource> proxyObject;

And then try to get proxy access to the nsIRDFDataSource
if (pmgr){
rv = pmgr->GetProxyForObject(NS_UI_THREAD_EVENTQ, NS_GET_IID(nsIRDFDataSource), dsource /* the nsIRDFDataSource obtained earlier */, 1|4 /* PROXY_SYNC | PROXY_ALWAYS */, getter_AddRefs(proxyObject));
}

Now we have a proxy object for calling all functions declared in nsIRDFDataSource. For instance, to call the assert, we would do
proxyObject->Assert(ns_subject, ns_predicate, ns_literal, PR_TRUE);

It's the same for any other XPCOM interface. You'll need to get access to it via GetProxyForObject, and then you can access all of its functions via the proxy as above. Hope this helped.
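Putting the pieces together, the whole flow looks like this (a consolidated sketch: dsource, ns_subject, ns_predicate and ns_literal are assumed to have been obtained elsewhere, as in the fragments above, and PROXY_SYNC/PROXY_ALWAYS are the named constants behind the 1|4 flags):

nsresult rv;
nsCOMPtr<nsIProxyObjectManager> pmgr = do_GetService("@mozilla.org/xpcomproxy;1", &rv);
if (NS_SUCCEEDED(rv) && pmgr)
{
    // Ask for a proxy that marshals calls onto the UI thread
    nsCOMPtr<nsIRDFDataSource> proxyObject;
    rv = pmgr->GetProxyForObject(NS_UI_THREAD_EVENTQ,
                                 NS_GET_IID(nsIRDFDataSource),
                                 dsource,
                                 PROXY_SYNC | PROXY_ALWAYS,
                                 getter_AddRefs(proxyObject));
    if (NS_SUCCEEDED(rv) && proxyObject)
    {
        // Now safe to touch the datasource from this non-GUI thread
        proxyObject->Assert(ns_subject, ns_predicate, ns_literal, PR_TRUE);
    }
}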

Sunday, March 16, 2008

XPCOM and OS X

What I had to do to get an XPCOM component written in C++ compiled and linked on OS X:
1. Install Xcode
2. Install Fink
3. At command prompt
cd /sw/bin;
sudo ./apt-get install glib
sudo ./apt-get install libIDL2
sudo cp libIDL-config-2 libIDL-config

4. Download the source of the Firefox version you'll be developing for. The only step I had to perform that wasn't mentioned on
http://developer.mozilla.org/en/docs/Mac_OS_X_Build_Prerequisites
was to add the following directory to my path:
/sw/bin

5. Change to the directory to which the Firefox source was downloaded and run the following commands:
cp ./obj-ff/dist/bin/libxpcom_core.dylib ./obj-ff/dist/sdk/bin
cp ./obj-ff/dist/bin/libplds4.dylib ./obj-ff/dist/sdk/bin
cp ./obj-ff/dist/bin/libplc4.dylib ./obj-ff/dist/sdk/bin

6. Follow the steps given in
http://rcrowley.org/2007/07/17/cross-platform-xpcom-a-howto/
with the Makefile looking like this (note that recipe lines must be indented with tabs):

GECKO_SDK := /obj-ff/dist/sdk

DEFINE := -DXP_UNIX -DXP_MACOSX

all: xpt dylib

xpt:
	$(GECKO_SDK)/bin/xpidl -m header -I$(GECKO_SDK)/idl foo.idl
	$(GECKO_SDK)/bin/xpidl -m typelib -I$(GECKO_SDK)/idl foo.idl

impl:
	g++ -w -c -o foo_impl.o -I $(GECKO_SDK)/include $(DEFINE) foo_impl.cpp

module:
	g++ -w -c -o foo_module.o -I $(GECKO_SDK)/include $(DEFINE) foo_module.cpp

dylib: impl module
	g++ -dynamiclib -o foo.dylib foo_impl.o foo_module.o -L$(GECKO_SDK)/lib \
	-L$(GECKO_SDK)/bin -Wl,-executable_path,$(GECKO_SDK)/bin \
	-lxpcomglue_s -lxpcom -lnspr4

clean:
	rm *.o
	rm foo.h
	rm foo.xpt
	rm foo.dylib

7. Run make. Hopefully, it should work.
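A couple of quick sanity checks on the result (standard OS X tools, nothing Mozilla-specific; NSGetModule is the entry point every XPCOM module of this era exports):

file foo.dylib
nm foo.dylib | grep NSGetModule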

Wednesday, January 09, 2008

Hyperconnected GSM-r solutions for the developing world.

GSM-r seeks to reinvent communications on board trains, between trains, and between trains and the surrounding world. A hyperconnected solution, transforming the current wired/wireless network of people communicating with one another into a network in which devices participate as well, will change it from a network of people tracking the rail network into a network that tracks the rail network itself. It does not remove the human element from the equation but rather changes the human from being the singular interface to being one of many. Information gathering and sending can be done without human intervention, and the network can initiate actions rather than the current situation where the human always initiates the action.
While hyperconnectivity is seen as the next step in the evolution of networks, it will also allow developing countries that have not yet evolved beyond GSM and traditional wired networks to provide viable solutions to long-running rail problems that have so far proved intractable due to cultural and economic issues.
Countries like China and India have huge and heavily used rail networks, but the additional infrastructure taken for granted in first-world countries is sorely lacking: displays on platforms, displays in the stations to announce arrivals and departures, information gathering and dissemination on passengers, reservations, etc. Culturally and economically, it may make sense for these countries to invest in hyperconnected solutions to some of these issues rather than the traditional solutions deployed in first-world countries. The penetration of mobile devices in these countries is large, and they are among the fastest growing wireless markets in the world, making hyperconnected solutions viable at present and more so in the future.
Creative solutions based around trains and hyperconnectivity can be numerous, based on cultural and environmental factors not seen in the first world. The problems that train users face in developing countries are far different from those faced in the first world, and the solutions offered by hyperconnectivity can lead to a greater quality of life. For instance, on-time train service is not quite the norm in the developing world, and a lot of time can be spent waiting on platforms, time that could be better used. A hyperconnected world would allow a train user to circumvent this by getting updates, and allow the train operators to lessen rider discontent where there currently isn't an immediately viable solution. Hyperconnected solutions will be most useful in mega-cities, think Bombay and Shanghai, where trains are a common form of transportation, commuting times are long, and professions are more and more digital in nature. Any advantage that being always connected offers will be embraced.
In the event of an accident, a hyperconnected GSM-r solution will allow a quick response from emergency personnel, who could be provided far more information than would previously have been available. A coordinated rescue and response with automatically gathered data would be possible and may significantly improve rescue effectiveness.
A hyperconnected world would make driverless trains safer to operate in countries where a single network is not robust enough. Using traditional wireless networks, Wi-Fi and WiMAX networks when the wireless networks are out of commission, and ad hoc networks would significantly increase the robustness of the network coverage available to driverless trains.
While hyperconnectivity is another evolution for the first world, it can be a fundamental rethink for the developing world.


Friday, November 16, 2007

undefined reference to `QApplication::QApplication

Add the following Linker flags: -lQtGui -lQtCore

In Eclipse, it would be under
Project -> Properties -> GCC C++ Linker -> Miscellaneous
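Equivalently, on a plain command line the link step would look something like this (a sketch assuming Qt4 libraries on the default search path; the object file names are hypothetical):

g++ -o myapp main.o moc_myqtapp.o myqtapp.o -lQtGui -lQtCore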

Wednesday, November 14, 2007

Linker error and sipXtapi

If you get the following error:

Invoking: GCC C++ Linker
g++ -L../../src/.libs/ -L../../src/libs/ -lQtGui -lQtCore -o"my_sipXezPhone" ./fred.o ./main.o ./moc_myqtapp.o ./myqtapp.o

./myqtapp.o: In function `myQtApp::show()':
/root/sipXtapi/sipXcallLib/workspace/my_sipXezPhone/Debug/../myqtapp.cpp:115: undefined reference to `sipxInitialize'
collect2: ld returned 1 exit status

It's because the linker cannot find the library that defines sipxInitialize. Do the following:

Add the following path to the linker library search path: <directory where sipXtapi is installed>/sipXcallLib/src/.libs/

Add the following entry to the linker libraries: sipXcall
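With both changes, the link command from above comes out something like this (using the install path from the error message; note that -lsipXcall comes after the object files that reference it, per the ordering rules quoted from the GCC manual below):

g++ -L../../src/.libs/ -L/root/sipXtapi/sipXcallLib/src/.libs/ -o"my_sipXezPhone" ./fred.o ./main.o ./moc_myqtapp.o ./myqtapp.o -lQtGui -lQtCore -lsipXcall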

In Eclipse, these two settings go under the GCC C++ Linker properties (the library search path and the libraries list).

A couple of webpages that helped me get to the solution:

http://developer.apple.com/documentation/DeveloperTools/gcc-3.3/gcc/Link-Options.html

Relevant text:

-l library
Search the library named library when linking. (The second alternative with the library as a separate argument is only for POSIX compliance and is not recommended.)

It makes a difference where in the command you write this option; the linker searches and processes libraries and object files in the order they are specified. Thus, foo.o -lz bar.o searches library z after file foo.o but before bar.o. If bar.o refers to functions in z, those functions may not be loaded.

The linker searches a standard list of directories for the library, which is actually a file named liblibrary.a. The linker then uses this file as if it had been specified precisely by name.

The directories searched include several standard system directories plus any that you specify with -L.

Normally the files found this way are library files—archive files whose members are object files. The linker handles an archive file by scanning through it for members which define symbols that have so far been referenced but not defined. But if the file that is found is an ordinary object file, it is linked in the usual fashion. The only difference between using an -l option and specifying a file name is that -l surrounds library with lib and .a and searches several directories.

http://www.parashift.com/c++-faq-lite/mixing-c-and-cpp.html

Tuesday, October 09, 2007

O-2875 requirements

I'm a bit obsessed with Edward Tufte at the moment, especially the following quote:
"Clutter and confusion are failures of design, not attributes of information"

One prime example: GSM specifications. Essentially, the documents are verbal representations of information that would be far better expressed in visual form.
I shall use the latest O-2875 spec, which defines four measurements under different configurations (more on these below).

The whole convoluted document can be expressed in a couple of diagrams, and not particularly complicated diagrams at that.

I was surprised how little information is actually contained in the document when I finished the first diagram. The whole document defines four measurements under different configurations: REC call, and VGCS/VBS calls originated by a mobile or a fixed-line dispatcher and terminated to the same.

Wednesday, September 19, 2007

Accessing User to User Info in a SIP invite message via sipXtapi

My task: to allow a SIP GUI to access UUI (User to User Information) sent in a SIP invite message

In the GSM-r environment, it is used for sending functional numbers between subscribers, and any SIP application in the GSM-r world must support it.

Let's start at the end and end at the beginning...with some forays to the beginning in between

SIP applications built on top of sipXtapi are provided access to the data received via the SIPXTAPI_API. So any data that needs to be accessed by the SIP application will need an API defined here. For the UUI, I defined the following:

SIPXTAPI_API SIPX_RESULT sipxCallGetUsertoUserInfo(const SIPX_CALL hCall,
char* szId,
const size_t iMaxLength) ;

SIPXTAPI_API SIPX_RESULT sipxCallGetUsertoUserInfo(const SIPX_CALL hCall,
char* szId,
const size_t iMaxLength)
{
OsStackTraceLogger stackLogger(FAC_SIPXTAPI, PRI_DEBUG, "sipxCallGetUsertoUserInfo");
OsSysLog::add(FAC_SIPXTAPI, PRI_INFO,
"sipxCallGetUsertoUserInfo hCall=%d",
hCall);

SIPX_RESULT sr = SIPX_RESULT_FAILURE ;
UtlString callId ;
UtlString UUSInfo ;

if (sipxCallGetCommonData(hCall, NULL, &callId, NULL, NULL, NULL, NULL, &UUSInfo, NULL))
{
if (iMaxLength)
{
strncpy(szId, UUSInfo.data(), iMaxLength) ;
szId[iMaxLength-1] = 0 ;
}
sr = SIPX_RESULT_SUCCESS ;
}

return sr ;
}
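From the application side, the new call is used like any other sipxCallGet routine. A sketch (hCall is whatever call handle the application already holds for the incoming call):

char szUUI[128];
if (sipxCallGetUsertoUserInfo(hCall, szUUI, sizeof(szUUI)) == SIPX_RESULT_SUCCESS)
{
    // szUUI now holds the raw value, e.g. "0506102011223320;pd=1"
}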

The sipxCallGetUsertoUserInfo implementation above is an exact copy of another sipxCallGet routine. I did that to
1. Keep a consistent format across the calls
2. Minimize the amount of work I had to do

The API makes a call to sipxCallGetCommonData, and so that procedure had to be changed to accommodate the new call. The changes are the new pUsertoUserInfo and pPriority parameters and the block near the end that copies the UUI.

UtlBoolean sipxCallGetCommonData(SIPX_CALL hCall,
SIPX_INSTANCE_DATA** pInst,
UtlString* pStrCallId,
UtlString* pStrRemoteAddress,
UtlString* pLineId,
UtlString* pGhostCallId,
UtlString* pContactAddress,
UtlString* pUsertoUserInfo,
UtlString* pPriority)
{
OsStackTraceLogger logItem(FAC_SIPXTAPI, PRI_DEBUG, "sipxCallGetCommonData");

UtlBoolean bSuccess = FALSE ;
SIPX_CALL_DATA* pData = sipxCallLookup(hCall, SIPX_LOCK_READ, logItem);
if (pData)
{
if (pInst)
{
*pInst = pData->pInst ;
}

if (pStrCallId)
{
if (pData->sessionCallId)
{
*pStrCallId = *pData->sessionCallId ;
}
else
{
*pStrCallId = *pData->callId ;
}
}

if (pStrRemoteAddress)
{
if (pData->remoteAddress)
{
*pStrRemoteAddress = *pData->remoteAddress;
}
else
{
pStrRemoteAddress->remove(0) ;
}
}

if (pLineId)
{
*pLineId = *pData->lineURI ;
}

if (pGhostCallId)
{
if (pData->ghostCallId)
{
*pGhostCallId = *pData->ghostCallId;
}
}

if (pContactAddress)
{
if (pData->contactAddress)
{
*pContactAddress = *pData->contactAddress;

}
}
if (pUsertoUserInfo)
{
if (pData->UsertoUserInfo)
{
*pUsertoUserInfo = *pData->UsertoUserInfo;
}
}
bSuccess = TRUE ;

sipxCallReleaseLock(pData, SIPX_LOCK_READ, logItem) ;
}

return bSuccess ;
}


sipXCallGetCommonData basically looks up the call data which is stored in SIPX_CALL_DATA. So if there's info that a SIP application needs to access, it needs to be declared there and then stored there so that sipXCallGetCommonData can access it.

Here's the declaration of the new field in SIPX_CALL_DATA

typedef struct SIPX_CALL_DATA
{
UtlString* callId;
UtlString* sessionCallId;
UtlString* ghostCallId;
UtlString* remoteAddress ;
UtlString* lineURI ;
UtlString* contactAddress ;
SIPX_LINE hLine ;
SIPX_INSTANCE_DATA* pInst ;
OsRWMutex* pMutex ;
SIPX_CONF hConf ;
SIPX_SECURITY_ATTRIBUTES security;
SIPX_VIDEO_DISPLAY display;
UtlBoolean bRemoveInsteadOfDrop ; /** Remove the call instead of dropping it
-- this is used as part of consultative
transfer when we are the transfer target
and need to replace a call leg within
the same CpPeerCall. */
SIPX_CALLSTATE_EVENT lastCallstateEvent ;
SIPX_CALLSTATE_CAUSE lastCallstateCause ;

SIPX_MEDIA_EVENT lastLocalMediaAudioEvent ;
SIPX_MEDIA_EVENT lastLocalMediaVideoEvent ;
SIPX_MEDIA_EVENT lastRemoteMediaAudioEvent ;
SIPX_MEDIA_EVENT lastRemoteMediaVideoEvent ;

SIPX_INTERNAL_CALLSTATE state ;
UtlBoolean bInFocus ;
int connectionId; /** Cache the connection id */
SIPX_TRANSPORT hTransport;
bool bHoldAfterConnect; /** Used if we are the transfer target, and the
replaced call is HELD or REMOTE_HELD, then
this flag is set, and indicates that the call
should be placed on hold after the connection
is established. */
bool bCallHoldInvoked; /** Set to true if sipxCallHold has been invoked.
Set to false if sipxCallUnhold has been invoked. */
bool bTonePlaying;
int nFilesPlaying;
UtlString* UsertoUserInfo;
} SIPX_CALL_DATA ;

Looking at all the uses of SIPX_CALL_DATA for incoming calls, almost all of them are lookups using sipxCallLookup. Thankfully (I assume due to good coding practices), there is only one place where SIPX_CALL_DATA is initialized with data for an incoming call, which is in sipxFireCallEvent, and so I added some calls there to save the data to SIPX_CALL_DATA.

void sipxFireCallEvent(const void* pSrc,
const char* szCallId,
SipSession* pSession,
const char* szRemoteAddress,
SIPX_CALLSTATE_EVENT event,
SIPX_CALLSTATE_CAUSE cause,
void* pEventData,
const char* szRemoteAssertedIdentity)
{
OsStackTraceLogger stackLogger(FAC_SIPXTAPI, PRI_DEBUG, "sipxFireCallEvent");
OsSysLog::add(FAC_SIPXTAPI, PRI_INFO,
"sipxFireCallEvent Src=%p CallId=%s RemoteAddress=%s Event=%s:%s",
pSrc, szCallId, szRemoteAddress, convertCallstateEventToString(event), convertCallstateCauseToString(cause)) ;

SIPX_CALL hCall = SIPX_CALL_NULL;

SIPX_CALL_DATA* pCallData = NULL;
SIPX_LINE hLine = SIPX_LINE_NULL ;
UtlVoidPtr* ptr = NULL;

SIPX_INSTANCE_DATA* pInst ;
UtlString callId ;
UtlString remoteAddress ;
UtlString lineId ;
UtlString contactAddress ;
SIPX_CALL hAssociatedCall = SIPX_CALL_NULL ;
// Prashant
UtlString UUS;
UtlString Priority;

// If this is an NEW inbound call (first we are hearing of it), then create
// a call handle/data structure for it.
if (event == CALLSTATE_NEWCALL)
{
pCallData = new SIPX_CALL_DATA;
memset((void*) pCallData, 0, sizeof(SIPX_CALL_DATA));
pCallData->state = SIPX_INTERNAL_CALLSTATE_UNKNOWN;

pCallData->callId = new UtlString(szCallId) ;
pCallData->remoteAddress = new UtlString(szRemoteAddress) ;
pCallData->pMutex = new OsRWMutex(OsRWMutex::Q_FIFO) ;

pSession->getUsertoUserInfo(UUS);

Url urlFrom;

pCallData->lineURI = new UtlString(urlFrom.toString()) ;
pCallData->pInst = findSessionByCallManager(pSrc) ;

hCall = gpCallHandleMap->allocHandle(pCallData) ;
pInst = pCallData->pInst ;

if (pEventData)
{
char* szOriginalCallId = (char*) pEventData ;
hAssociatedCall = sipxCallLookupHandle(UtlString(szOriginalCallId), pSrc) ;

// Make sure we remove the call instead of allowing a drop. When acting
// as a transfer target, we are performing surgery on a CpPeerCall. We
// want to remove the call leg -- not drop the entire call.
if ((hAssociatedCall) && (cause == CALLSTATE_CAUSE_TRANSFERRED))
{
// get the callstate of the replaced leg
SIPX_CALL_DATA* pOldCallData = sipxCallLookup(hAssociatedCall, SIPX_LOCK_READ, stackLogger);
bool bCallHoldInvoked = false;
if (pOldCallData)
{
bCallHoldInvoked = pOldCallData->bCallHoldInvoked;
sipxCallReleaseLock(pOldCallData, SIPX_LOCK_READ, stackLogger);
}

if (bCallHoldInvoked)
{
SIPX_CALL_DATA* pData = sipxCallLookup(hCall, SIPX_LOCK_WRITE, stackLogger);
if (pData)
{
pData->bHoldAfterConnect = true;
sipxCallReleaseLock(pData, SIPX_LOCK_WRITE, stackLogger);
}
}
sipxCallSetRemoveInsteadofDrop(hAssociatedCall) ;

SIPX_CONF hConf = sipxCallGetConf(hAssociatedCall) ;
if (hConf)
{
sipxAddCallHandleToConf(hCall, hConf) ;
}
}
else if ((hAssociatedCall) && (cause == CALLSTATE_CAUSE_TRANSFER))
{
// This is the case where we are the transferee -- we want to
// make sure that the new call is part of the conference
SIPX_CONF hConf = sipxCallGetConf(hAssociatedCall) ;
if (hConf)
{
// The original call was part of a transfer -- make sure the
// replacement leg is also part of the conference.
sipxAddCallHandleToConf(hCall, hConf) ;
}
}
}

// Increment call count
pInst->pLock->acquire() ;
pInst->nCalls++ ;
pInst->pLock->release() ;

callId = szCallId ;
remoteAddress = szRemoteAddress ;
lineId = urlFrom.toString() ;

pCallData->UsertoUserInfo= new UtlString(UUS) ;
.
.
.
.



Now to the very beginning for the rest of the explanation. UUI info in an invite message is received as: User-To-User: 0506102011223320;pd=1

First off we'll need to extract the info from the invite message and sipXtapi makes this pretty simple.

We'll need to define the UUI field in SipMessage.h

#define SIP_USER_TO_USER_INFO_FIELD "User-To-User"

and then define a method in the SipMessage class to extract the UUI.

UtlBoolean SipMessage::getUsertoUserInfo(UtlString& eventField) const
{
    const char* value = getHeaderValue(0, SIP_USER_TO_USER_INFO_FIELD);
    eventField.remove(0);
    if(value)
    {
        eventField.append(value);
    }
    return(value != NULL);
}


This is where I got stuck for a while. I knew how to extract the data, and I knew where to store it for the SIP application, but how do I get the extracted data to the variable where it needs to be stored?

sipxFireCallEvent has access to class SipSession, but not to class SipMessage. SipSession in turn is used in methods which have access to variables from SipConnection, which in turn has access to SipMessage. So the data would have to be copied from SipMessage to SipConnection, then to SipSession, and then finally from SipSession into SIPX_CALL_DATA. I think I should've at least been able to avoid SipConnection, but that proved impossible: the methods that had access to both SipMessage and SipSession were not being hit while the data was still available in SipMessage. So, there was no other way than to go via SipConnection.

First up, copying the data from SipMessage to SipConnection.
A new variable in SipConnection holds the UUI:

UtlString mUsertoUserInfo;

The variable is populated in the method processInviteRequestOffering:

void SipConnection::processInviteRequestOffering(const SipMessage* request,
int tag,
UtlBoolean doesReplaceCallLegExist,
int replaceCallLegState,
UtlString& replaceCallId,
UtlString& replaceToTag,
UtlString& replaceFromTag)
{
UtlString callId ;

getCallId(&callId) ;
request->getCSeqField(&lastRemoteSequenceNumber, NULL) ;

// Save a copy of the INVITE
inviteMsg = new SipMessage(*request);
inviteFromThisSide = FALSE;
setCallerId();

// Set the to tag if it is not set in the Invite
if(tag >= 0)
{
inviteMsg->setToFieldTag(tag);

// Update the cached from field after saving the tag
inviteMsg->getToUrl(mFromUrl);
}

// Save line Id
UtlString uri;
request->getRequestUri(&uri);
// Convert the URI to name-addr format.
Url parsedUri(uri, TRUE);
// Store into mLocalContact, which is in name-addr format.
parsedUri.toString(mLocalContact);

request->getUsertoUserInfo(mUsertoUserInfo);

Then we need to copy the data from SipConnection to SipSession. Again, create a variable to hold the data:

UtlString mUUSInfo;

And additionally, new methods to read and populate the new variable:

UtlBoolean getUsertoUserInfo(UtlString& UUSinfo);
UtlBoolean setUsertoUserInfo(UtlString& UUSinfo);

The setter itself is quite simple:

UtlBoolean SipSession::setUsertoUserInfo(UtlString& UUSinfo)
{
    mUUSInfo = UUSinfo;
    return(!UUSinfo.isNull());
}


The last question is where to call this method so that the info is populated correctly. After much searching, the only place that made sense was a method called getSession (which seems weird to me, as it appears to be a method that gets data rather than writes it).

UtlBoolean SipConnection::getSession(SipSession& session)
{
UtlString callId;
getCallId(&callId);
SipSession ssn;
UtlString temp;
ssn.setCallId(callId.data());
ssn.setLastFromCseq(mCSeqMgr.getCSeqNumber(CSEQ_ID_INVITE));
ssn.setLastToCseq(lastRemoteSequenceNumber);
ssn.setFromUrl(mFromUrl);
ssn.setToUrl(mToUrl);
// Prashant
ssn.setUsertoUserInfo(mUsertoUserInfo);
.
.
.

And finally, we have to assign the variable to the current instance in SipSession's assignment operator:

SipSession& SipSession::operator=(const SipSession& rhs)
{
if (this == &rhs) // handle the assignment to self case
return *this;

UtlString::operator=(rhs); // assign fields for parent class


mLocalUrl = rhs.mLocalUrl;
mRemoteUrl = rhs.mRemoteUrl;
mLocalContact = rhs.mLocalContact;
mRemoteContact = rhs.mRemoteContact;
mInitialMethod = rhs.mInitialMethod;
mInitialLocalCseq = rhs.mInitialLocalCseq;
mInitialRemoteCseq = rhs.mInitialRemoteCseq;
mLastFromCseq = rhs.mLastFromCseq;
mLastToCseq = rhs.mLastToCseq;
mSessionState = rhs.mSessionState;
msLocalRequestUri = rhs.msLocalRequestUri;
msRemoteRequestUri = rhs.msRemoteRequestUri;
msContactUriStr = rhs.msContactUriStr;
mUUSInfo = rhs.mUUSInfo;

return *this;
}
At this point, when sipxFireCallEvent is called, mUUSInfo contains the correct information to be written into SIPX_CALL_DATA and used via the sipXtapi API.
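As an aside, once the application has the raw value, splitting the functional number from the pd parameter is simple string handling. A sketch in plain C++ (illustrative only, not part of the sipXtapi changes above):

#include <string>

// "0506102011223320;pd=1" -> "0506102011223320"
std::string extractFunctionalNumber(const std::string& rawUUI)
{
    std::string::size_type semi = rawUUI.find(';');
    return (semi == std::string::npos) ? rawUUI : rawUUI.substr(0, semi);
}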

Thursday, June 21, 2007

Using Wikis to decentralize management of a lab

Problem: a large lab environment. Several MSCs with different software loads. Several BSSs, again running different software loads, all of which can be connected at different times to different MSCs. All these BSSs have multiple cells, whose coverage can be connected to different offices. (There is more equipment on top of this that needs to be tracked, but I'll use this basic setup to show how wikis can be used to decentralize the management of this lab environment.)
In the past, there was a single person who was in charge of this. Any changes to the configuration went through him/her. He tracked the configuration, triggered and tracked the changes, and if there were any problems, was the point of contact for that as well. He updated the config books, kept track of data changes to all the nodes (which with a large environment used by numerous projects was a big task), did the short term booking for the nodes and the long term planning for the lab, and was in charge of making sure the lab was correctly configured for each project using each node, i.e. patches were up to date, config that was requested was correct, and the lab was working when it was needed.
Then came the budget cuts. The position wasn't eliminated as such but the funding for a dedicated person disappeared. So we were left with a task that remained unchanged but no actual budget to address that task.
The first attempt to remedy the situation was to split the role between many individuals: one person for each MSC, one person for the BSSs, and one person in charge overall. They took over these tasks in addition to their usual ones. As things got hectic, however, this task was the first to be ignored.
This kind of bookkeeping task is abhorred by engineers. It is also the most thankless. No one notices when things are going well; that's what's expected. But when things aren't going well, the amount of complaining is frustrating. It's a very obvious failure, and people really like to stick it to you that you're not doing your job. It seems like whatever frustrations are being felt get taken out on the lab managers when things go wrong.
It was a bit of a Catch-22. On the one hand, you cannot pass this task on to someone without the requisite experience; that's an immediate recipe for disaster. On the other hand, experience and knowledge are required for other projects as well, and allocating them to lab upkeep seems a bit of a waste.
It was quickly realized that centralizing the task even further was not an option given the manpower available, and that the whole team needed to step up and address this in a decentralized way. The wiki turned out to be a good solution to the problems we faced, though it in itself is not a full solution.

Short term Booking
Booking is done via the wiki on a first-come, first-served basis, per MSC and per cell. Any conflicts are ironed out between the people involved first and then via the project managers. The complaint is that a previous booking can be overwritten without the previous person's knowledge. That's true, but the wiki does keep track of who's making the changes, and you can get emails when pages are changed, so there are ways to track this. In the end though, if there's an a-hole in your department, a wiki just gives that person more ways of being an a-hole.

Long Term planning
This has been taken over by a manager; it cannot be handled via a wiki. It really requires one person looking at the inputs and deciding what's required in the future.

Updating the config books
Config books are updated by each person based on the changes they make. If you make a change and it is not reflected in the config book, then the system can be changed back to the configuration noted in the config book at any time. This gives people an impetus to update the config book when a change is made. The method works best when the configurations are checked periodically against the config book. A person to check that the config books are up to date is required (perhaps a manager), but they avoid being the target of any bitching: everyone knows the policy, and if it is not followed, well then...tough.

Hardware changes
This can be decentralized based on experience. Caveat: a change that is not done correctly can hose the system, and finding out what that change was can be time consuming. We went through stages where all changes were decentralized and, based on our experiences, have centralized some tasks and left others to the herd. The problem with centralizing a task is that the people responsible for it need time to do what's requested. In our lab, given the number of projects running concurrently and the changes required, that may be beyond the capacity of the person put in charge. It really should be decentralized completely, but complaints have been too numerous.
Once a hardware change is made, the wiki is updated with the change. Thus everyone can check what the current configuration is and what needs to be requested or done.

Making sure a lab is correctly configured
This again is decentralized to a degree. There are designated people to take care of the activities that should happen at regular intervals. Anything specific to a project is the responsibility of the person running the project. Any change that has a system-wide effect is noted on specific wiki pages, so if another project is seeing some weird issues, there are specific wiki pages to check. Again, there are complaints, but no one has been able to come up with a better system, and the current one works pretty well, akin to PGP.

The main problem with the current system is people not updating the wiki. The whole system depends on that happening, but especially when things are extra busy, updating the wiki is not uppermost in people's minds. We have threatened punitive measures to keep transgressors in line, but none have actually been imposed; it just seems like that would create more pissed-off people than anything else. Less adherence rather than more.

At the moment, the benefits of using the wiki outweigh the costs. It frees us, in large part, from a single point of contact for much of lab management, and it frees a single person from mind-numbing work. The amount of work that goes into lab management has decreased significantly, and in general the lab isn't in as good a shape as before. However, the risks added to specific projects by this change have been minimal, and the quality of our products has not suffered.

Tuesday, April 03, 2007

How not to implement an automation platform

The task that my team works on is the end-to-end test and integration of a communications network. Struggling through a period of mass layoffs, when there weren't enough minds and bodies available to perform even the minimum tasks required to get a robust product out to our customers, we took the forward-looking approach and dedicated some mind/timeshare to implementing an automation platform. The idea was that it would allow us to increase our workload by shifting repetitive tasks to automation while allowing us greater test coverage, plus it would let us focus our energies on the interesting, non-repetitive work.

One thing I immediately learned is that engineers like to implement automation platforms. I liked it. It's fun. Challenging. The meetings were full of energy and ideas. I also learned that engineers do not like debugging automation platforms. Once the software and hardware were up and running and the platform was proven functional, working on stability issues was the last thing anyone wanted to do. The answer from everyone was always "Well, it worked for me. Must be your setup/testcase/server/coverage." It was never the automation platform that was the problem.

Work on the platform started in 2004, and this week the whole project was finally put to rest. What follows is the post-mortem, i.e. what to do and what not to do when implementing an automation platform.
1. There were some repetitive tasks being done at the time we undertook the automation platform, but they were not large: maybe a day's worth of work every month. The thought when this started was that those repetitive tasks would get larger, or that we would run automated tests more often if an automation platform were available. There was a range of tests that we could cover with the platform that was not being done at the time.
It turned out that the tests being run repeatedly were not unmasking bugs. We were running the tests which always passed. A warm fuzzy feeling followed, but nothing else. After a while, it became apparent that the tests were a waste of time, and they were stopped. However, the link to the work on the automation platform was not made. We continued working on the automation platform even though the initial requirement was no longer valid. The thought was to keep looking forward without really looking forward.
It also turned out that the problems being reported by customers were not those that could be uncovered by automation. So the range of tests that was not being done and potentially could be covered by automation was not at all essential to the stability or robustness of the product. Yet again, the link to the work on the automation platform was not made. It seemed like the work on automation already in progress could not be stopped; time and effort already spent demanded more time and effort. Though apparent now, it wasn't really apparent at the time.
Finally, no one likes doing repetitive tasks, even if they're simple to do. That was the reason to undertake the automation platform. By the same token, no one takes on more repetitive tasks just because they're made easier. They're still repetitive. It's still doing the same thing again, and the overhead of testing is still there. I've never heard an engineer say "I've got some time. Let me run some automated testcases." I've often heard an engineer say "F****** automated testcases again. F***"

2. Testing an end-to-end network where software on multiple nodes is updated frequently is probably not the ideal environment for an automation platform. But that was one of the key drivers to begin with: we have so many changes to the network that we needed a low-effort way to see if basic functionality still worked. However, the testing required to cover the software changes was always so specific that the effort to automate those tests never made sense. It seemed that there were a lot of tests in a lot of different testplans that were alike (probably because basic tests were always included and testplans were copied from one another), but the bulk of the work was tests that were unique and time consuming. Tests that could not be automated. So the automation platform wasn't really making a dent in our workload.

Configuring an automation system where so many changes were happening also required quite a bit of work. Automation does not respond well to large-scale changes, and the work required for upkeep of the system was large. In the end, the total effort saved was nil, or maybe even negative. The lack of minds/bodies was exacerbated by the need for reconfiguring. Automation was not really saving us time. People were giving up on the automation platform because so much reconfiguring and tweaking was needed before their automation runs. Since there was no central person responsible for the platform, the person who required the automation run was basically also responsible for making sure it was in working order before they began. That resulted in a lot of frustration, and people began giving up on the platform before they even really used it.

3. We did not have the budget for a dedicated system for the automation platform. We basically had to run our tests on a system that was simultaneously under test by a lot of other users: a system with multiple nodes, each of which was being tested by multiple people at once. Initially, the thought was that we would either run automation when the system was not being used by other people (overnight, for instance) or create our own system within the system that could be shielded from other users. Neither ever fully worked.

Resetting the system to a known initial starting point after use by many users was a nightmare. Things were changing that should not have been. I think we could've made this work if there had been people managing the changes; the lack of minds and bodies precluded us from setting that up. In our environment, managing changes meant an email was sent around saying something was going to be changed unless they heard otherwise. Rarely did they hear otherwise.

Automation does not deal well with non-standard starting points. Changes to the system under test can and do have severe effects on test results, and when running suites of testcases, they produce totally incoherent results.

I have read several documents that say a dedicated system is needed for automation. I do not think this is absolutely necessary; it can be gotten around by having a set of people managing the changes to the system. However, a non-dedicated system with all testers managing their own changes is a recipe for disaster.

4. Writing testcases for automation was deemed to be one of the easier activities. We handed the activity off to a couple of contractors who, with a few inputs from us, basically came up with the framework, wrote the testcases, verified them and passed them to us for a second round of verification. Given that they were getting paid per testcase, the robustness problems they were facing really didn't cause them any bother. To them, it wasn't the testcase that was the problem; it was the SUT or the automation platform. Of course, some of their failures had to do with both those things, but the design of their testcases only exacerbated those issues. We also only saw a list of the testcases that were being automated, with no further info. In the end, we got a set of testcases that ran and passed sometimes and failed other times, even when there were no changes to the SUT. They were useless. We did get a framework for writing testcases, but that may have been a curse in disguise, as we never got around to totally rewriting the testcases from the ground up, feeling that we already had some work done.
The testcases have since been through a few revisions; they now look nothing like the testcases originally written, and one can get consistent results from them. But that does not fix all the other problems.

Moral of the story: automation is not a substitute for minds and bodies. It only shifts the work, and the work is continual if the SUT is continuously changing.

Monday, February 19, 2007

aTCA and the Telecom Service Provider

The promise of aTCA for telecom service providers is that it will unshackle them from their current relationship with their core network vendors. Currently, the core network vendors provide multiple proprietary large boxes plus huge amounts of proprietary software, which cost millions of dollars. Switching vendors again involves the expenditure of millions of dollars, which the vendor knows cannot and will not be done lightly. Once a vendor has a foot in the door of the provider's CO, the vendor kind of has the provider by the balls.

aTCA promises to reduce the huge capital expenditures associated with changing vendors. The process could be as simple as switching out one module for another, in which case the capital expenditure is the cost of the module, in the tens of thousands of dollars, plus the cost of software. The process could even be simplified to loading software from a different vendor onto an already installed blade. aTCA threatens to upend the current relationship between service providers and equipment vendors and could allow new vendors into a marketplace that was closed to so many due to the immense capital expenses required.

In order to maximize the promise of aTCA, telecom providers should insist on a chassis/backplane that meets the aTCA standard. It would be even better if they bought the chassis/backplane separately and then asked vendors to supply modules for the chassis that provide the various functions needed for their network. In this way, the provider forces vendors to meet the aTCA requirements and gains the ability to mix and match modules from different vendors. Given the opportunity, a vendor will sell a chassis that is only compatible with their own modules, thereby locking the provider into their products. This approach prevents that. In addition, it increases the number of vendors the provider has to choose from.

Another step would be for the provider to specify the hardware modules that fit into those shelves as well. They then buy the modules on their own and ask their traditional telecom vendors to supply only the software required for those modules. This gets to the heart of what telecom vendors do best: providing robust software for off-the-shelf parts. This approach also opens the telecom marketplace to new vendors, companies that did not have the capital to enter a hardware-software business but have the wherewithal to enter a purely software space.

Sunday, December 03, 2006

Using javax.comm from your jar file

I tried all the suggestions for including javax.comm in my jar file, but none of them worked.
The problem I kept having was that the jar file I generated would never run for someone who did not have javax.comm installed. The only solution that worked for me was the following:
The manifest file contained the following line:
Class-Path: .\comm.jar

And comm.jar must be present in the same directory as your jar file, i.e. I sent comm.jar along with my jar file and asked users to put both in the same directory.

This worked for java version 1.5.0_09 but did not work for 1.4.x
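For reference, a minimal sketch of how such a jar gets built (MyApp and the file names are hypothetical; note that a manifest file must end with a newline):

manifest.txt:
Main-Class: MyApp
Class-Path: .\comm.jar

jar cfm myapp.jar manifest.txt MyApp.class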

Monday, August 28, 2006

Measuring Transfer Delay of Circuit Switched Data calls in Cellular Networks

Here is the requirement from Document No. O-2475, ERTMS/GSM-R Quality of Service Test Specification
3.4 Transfer delay of user data frame
3.4.1 Definition:
This is the value of the elapsed time between the request for transfer of a data frame and the indication of successfully transferred end-to-end data frame.
3.4.2 Pre-conditions for measurement:
Measurement interfaces are IGSM(T) and IFIX(T), measurement point is IGSM(T)
Only successfully received user data frames (ie data frames received with a correct CRC check sum) are evaluated.
The length of data frame shall be 30 bytes.
The fixed side responding application is a test application responsible for echoing all incoming data frames back to the sender.
The response time of the test application shall be very small and negligible.
3.4.3 For measurement:
Round trip delay is an allowed measurement procedure.
The test is performed by sending and receiving bytes and in the test application represented by ASCII characters.
At IGSM(T), value of the half of elapsed time between start of transmission of a user data frame and end of reception of the same frame echoed back from B-subscriber terminal application is evaluated as transfer delay. The term 'user' refers to the user of the GSM-R bearer service.
3.4.4 Recommended tools:
Test application for control of terminal and automation (scripting).
Test terminal for tracking possible failure in automated test, and/or
Abis protocol analyser for tracking possible failure.
GPS for position information.


This was accomplished using Rexx scripts on two PCs connected to mobiles via their serial ports. The Rexx scripts set up the connection between the two mobiles. On the initiator side, the script prints out a time stamp before it starts transmitting data. On the receiving side, once the expected data has been received, a time stamp is again printed. According to the definition in the test above, the transfer delay is half the elapsed time between the two printed times.
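To make the arithmetic concrete: if the originator's stamp reads 14:00:00.000000 and the receiver's stamp reads 14:00:00.740000, the elapsed time is 740 ms and the reported transfer delay is 740/2 = 370 ms (the numbers are made up for illustration; in practice you would average over the repeated calls).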

Rexx Script for Originator:
/* INITIALIZATION + GET NUMBER */
OK=0
BUSY=0
FAIL=0
TOUT=0
ERROR=0
TIMEOUT=0
Speed="9600" /* Data Speed */
CALL ZocCls /* Clear Screen */
SAY "Initialization starts, please wait..."
CALL ZocSend "at+cr=1^M"
CALL ZOCDELAY 1
CALL ZocSend "at+crc=1^M"
CALL ZOCDELAY 1
CALL ZocSend "at+cmee=1^M"
CALL ZOCDELAY 1
CALL ZocSend "atx1^M"
CALL ZOCDELAY 1
CALL ZocSend "ats0=1^M"
CALL ZOCDELAY 1
CALL ZocSend "ats2=67^M"
CALL ZOCDELAY 1
SAY "Initialization done..."

/* Loop of data calls */
DO TRY=1 TO 10
ZocSend "AT+CBST=71,0,0^M"

CALL ZOCDELAY 1 /* WAIT 1 SECOND */
CALL ZocTimeout 45
/* Dial the NUMBER */
/* CALL ZocSend "ATD"NUMBER"^M" */
CALL ZocSend "ATD8188074^M"

/* Check for the result */

DO LINE=1 TO 3
timeout=ZocGetLine()
IF timeout=640 THEN DO
TOUT=TOUT+1
SAY
SAY "XXXXX Timeout during Try " TRY "."
LEAVE LINE
END
IF LEFT(ZOCLASTLINE(),7)="CONNECT" THEN DO
OK=OK+1
CALL ZOCDELAY 4
timenow=time('L')
say 'Time' timenow
CALL ZocSend "00000000000000000000000000000" TRY
SAY ">>>>> Testtransfer " TRY " is sent."
CALL ZOCDELAY 1
ZocWait "<<<<< Testtransfer" /* to wait for answer from datacall_receive Script */
CALL ZOCDELAY 2
LEAVE LINE
END /* IF */
IF LEFT(ZOCLASTLINE(),4)="BUSY" THEN DO
CALL ZocBeep 1
BUSY=BUSY+1
LEAVE LINE
END /* IF */
IF LEFT(ZOCLASTLINE(),10)="NO CARRIER" THEN DO
CALL ZocBeep 1
FAIL=FAIL+1
LEAVE LINE
END /* IF */
IF LEFT(ZOCLASTLINE(),5)="ERROR" THEN DO
CALL ZocBeep 1
ERROR=ERROR+1
LEAVE LINE
END /* IF */
END LINE

CALL ZOCDELAY 5 /* WAIT 5 SECONDS */
CALL ZocSend "+++"
SAY
SAY "+++ IS SENT"

CALL ZOCDELAY 5 /* WAIT 5 SECONDS */
CALL ZocSend "ATH^M"

CALL ZOCDELAY 10 /* WAIT 10 SECONDS */

END TRY


Rexx script for Terminator
/* REXX test script for receive of data calls */

/* INITIALIZATION + GET NUMBER */
OK=0 /* counter */
BUSY=0 /* counter */
FAIL=0 /* counter */
TIMEOUT=0 /* counter */
CALL ZocTimeout 60 /* timeout value for ZocWait */
Speed="9600" /* Data Speed */
CALL ZocCls /* Clear Screen */
SAY "Initialization starts, please wait..."
CALL ZocSend "at+cr=1^M"
CALL ZOCDELAY 1
CALL ZocSend "at+crc=1^M"
CALL ZOCDELAY 1
CALL ZocSend "at+cmee=1^M"
CALL ZOCDELAY 1
CALL ZocSend "atx1^M"
CALL ZOCDELAY 1
CALL ZocSend "ats0=1^M"
CALL ZOCDELAY 1
CALL ZocSend "ats2=67^M"
CALL ZOCDELAY 1
SAY "Initialization done..."

/* Loop of 10 data calls */
DO TRY=1 TO 10

CALL ZocSend "AT+CBST=71,0,0^M"

CALL ZOCDELAY 1 /* WAIT 1 SECOND */
SAY "---> Waiting for RING"

ZocWait "+CRING"
SAY "---> Waiting for CR"

ZocWait "+CR:"
SAY "---> Waiting for CONNECT"

ZocWait "CONNECT"
SAY "---> Waiting for DATA"

ZocWait "00000000000000000000000000000"
timenow=time('L')
say 'Time' timenow

CALL ZOCDELAY 1 /* WAIT 1 SECOND */
SAY
CALL ZocSend "<<<<< Testtransfer back: " TRY
SAY "<<<<< Testtransfer number " TRY " is sent back."
CALL ZOCDELAY 1 /* WAIT 1 SECOND */
SAY "---> Waiting for NO CARRIER"

ZocWait "NO CARRIER^M"
CALL ZOCDELAY 1 /* WAIT 1 SECOND */
SAY "##################################################################"

END TRY
CALL ZocBeep 2

Saturday, April 01, 2006

Carnival of the Mobilists #21

You can find the latest edition @
http://www.mopocket.com/2006/03/the_21st_carnival_of_the_mobil.php

Some interesting thoughts on cellphone culture from Howard Rheingold and Mimi Ito.
I completely agree that the most important applications of the next generation of mobile culture will be those that are adopted or appropriated by kids on the streets of Shanghai or Rio or Bombay (probably not Milan), places where competing technologies (landlines, broadband, etc.) are less reliable or less widely available and mobiles are the main form of voice/data access.

A few articles applauding the EU's decision to abolish roaming charges within Europe.
I think this is the wrong way to go about it (maybe it's an American view), as a market as competitive as wireless should be able to set competitive rates without interference. If all carriers are setting the same price, then either they need the revenue or they're in collusion with one another. If it made no economic sense, then at least one of the carriers would be advertising no roaming charges to grab more customers. If they're in collusion, then the anti-trust laws should be invoked against them. Setting prices by decree seems a bit archaic.
It appears to me that this is a decision made to show that the EU commission is doing something rather than a decision that will actually produce tangible results for users at large.

Some interesting questions about Skyping.
What don't I like about Skype (the free version, that is)? It adds to the unmanageable interruptions in my life. Landlines and mobiles come with voice mail. Messaging enables me to send back a response when I have the time. But Skype? If I don't answer the call, then that conversation is lost; either I or the person who called has to remember to retry. And if you do answer, then whatever you were doing is interrupted. It's a 21st-century technology wrapped in an early-20th-century mentality. I need a service that lets me manage my interruptions better, because there are so many of them. Skype's not it.

And much more....