There are really only two possibilities here: either pq->Array is being modified to something other than the return value of a malloc() or realloc(), which invokes undefined behaviour when you pass it to realloc(), or the heap is getting corrupted somehow, in which case pretty much anything can happen.
Pointers are easy enough to debug: breakpoint every malloc(), realloc() and free() to make sure the value isn’t changing in between. If it is, track down when, then where, then why.
Heap corruption on the other hand is generally a right pain to debug, since you usually only see the symptoms some time after it occurs, and the errors you get are often nonsense and only tell you something went wrong.
Now, there is a potential heap corruption waiting to happen in the code posted, but it doesn’t seem to be the one causing this crash. Consider when realloc() runs out of memory here:
- insert function calls pq_addMem() for more space.
- pq_addMem() doubles pq->pq_max.
- realloc() returns NULL; pq->Array remains the same size. pq_addMem() returns.
- insert function sees pq->pq_max is now larger, and proceeds to insert nodes off the end of the array, overwriting internal heap data.
- …
- A later call to a memory management function tries to interpret the corrupted heap, and all hell breaks loose.
Double-check all your pointer and array accesses, make sure everything really is the size you think it is, and look out for the sneaky stuff like inadvertently malloc()’ing 0 bytes and then trying to actually use the pointer it happily returns.
I have developed the following piece of code, which works correctly:
#include "header.h"

int test_DGGEV_11_14a(){
    const int n=3;
    double a[n][n]={{1,1,1},{2,3,4},{3,5,2}};
    double b[n][n]={{-10,-3,12},{14,14,12},{16,16,18}};
    //double a[n][n]={{1,7,3},{2,9,12},{5,22,7}};
    //double b[n][n]={{1,7,3},{2,9,12},{5,22,7}};
    /*const int n=2;
    double a[n][n]={{1e-16,0},{0,1e-15}};
    double b[n][n]={{1e-16,0},{0,1e-15}};*/
    lapack_int info;
    double alphar[n]={0.0};
    double alphai[n]={0.0};
    double beta[n]={0.0};
    double vl[n][n]={{0.0}};
    double vr[n][n]={{0.0}};
    info=LAPACKE_dggev(LAPACK_ROW_MAJOR,'V','V',n,*a,n,*b,n,alphar,alphai,beta,*vl,n,*vr,n);
    std::cout<<"right eigen vector (what we want):\n";
    for(int i=0;i<n;i++){
        for(int j=0;j<n;j++){
            printf("%f ",vr[i][j]);
        }
        printf("\n");
    }
    std::cout<<"left eigen vector:\n";
    for(int i=0;i<n;i++){
        for(int j=0;j<n;j++){
            printf("%f ",vl[i][j]);
        }
        printf("\n");
    }
    std::cout<<"eigen values:\n";
    for(int i=0;i<n;i++){
        if(beta[i]>DBL_MIN || beta[i]<-DBL_MIN){
            printf("%f ",alphar[i]/beta[i]);
            printf("\n");
        }else{
            printf("beta is zero");
            printf("\n");
        }
    }
    return info;
}
I modified the above working code to use the LAPACKE DGGEV routine for large matrices; the modified code is shown below:
#include "header.h"

int test_DGGEV_11_17a(){
    const int r=342;
    const int c=342;
    double**a=NULL;//stiffness
    a=new double*[r];
    for(int i=0;i<r;i++)
        a[i]=new double[c];
    readFile("Input_Files/OUTPUT_sub_2_stiffness.txt",a,r,c);
    writeFile("Output_Files/K.txt",a,r,c);//to check if readFile was OK
    double**b=NULL;//mass
    b=new double*[r];
    for(int i=0;i<r;i++)
        b[i]=new double[c];
    readFile("Input_Files/OUTPUT_sub_2_mass.txt",b,r,c);
    writeFile("Output_Files/M.txt",b,r,c);//to check if readFile was OK
    const int n=r;//r=c=n
    lapack_int info=110;
    double alphar[n]={0.0};
    double alphai[n]={0.0};
    double beta[n]={0.0};
    //double vl[n][n]={0.0};//generates stack overflow
    double**vl=NULL;
    vl=new double*[r];
    for(int i=0;i<r;i++)
        vl[i]=new double[c];
    for(int i=0;i<r;i++)
        for(int j=0;j<c;j++)
            vl[i][j]=0.0;
    //double vr[n][n]={0.0};//generates stack overflow
    double**vr=NULL;
    vr=new double*[r];
    for(int i=0;i<r;i++)
        vr[i]=new double[c];
    for(int i=0;i<r;i++)
        for(int j=0;j<c;j++)
            vr[i][j]=0.0;
    info=LAPACKE_dggev(LAPACK_ROW_MAJOR,'V','V',n,*a,n,*b,n,alphar,alphai,beta,*vl,n,*vr,n);
    return info;
}
In the above modified code (for large matrices), I have to allocate memory on the heap because otherwise the stack would overflow. The problem is that when I allocate memory on the heap with new, I get the following exception, which is related to the heap and occurs inside dbgheap.c (Debug CRT Heap Functions):
Does anybody know why this exception happens? Maybe it is related to the fact that the LAPACKE DLLs are using a different heap for allocations… I don’t know.
EDIT:
the stack trace is this:
EDIT:
Finally, I solved the problem by replacing all the 2D arrays with 1D arrays. The following code is the corrected code, which works without any error. Please see the answer of "Ilya Kobelevskiy" for the details of this solution.
int test_DGGEV_11_18a(){
    const int r=342;
    const int c=342;
    double*a=NULL;//stiffness
    a=new double[r*c];
    for(int i=0;i<r*c;i++)
        a[i]=0.0;
    readFile_1Darray("Input_Files/OUTPUT_sub_2_stiffness.txt",a,r,c);
    writeFile_1Darray("Output_Files/K.txt",a,r,c);//to check if readFile was OK
    double*b=NULL;//mass
    b=new double[r*c];
    for(int i=0;i<r*c;i++)
        b[i]=0.0;
    readFile_1Darray("Input_Files/OUTPUT_sub_2_mass.txt",b,r,c);
    writeFile_1Darray("Output_Files/M.txt",b,r,c);//to check if readFile was OK
    const int n=r;//r=c=n
    lapack_int info=110;
    //double alphar[n]={0.0};
    double*alphar=NULL;
    alphar=new double[n];
    for(int i=0;i<n;i++)
        alphar[i]=0.0;
    //double alphai[n]={0.0};
    double*alphai=NULL;
    alphai=new double[n];
    for(int i=0;i<n;i++)
        alphai[i]=0.0;
    //double beta[n]={0.0};
    double*beta=NULL;
    beta=new double[n];
    for(int i=0;i<n;i++)
        beta[i]=0.0;
    //double vl[n][n]={0.0};//generates stack overflow
    double*vl=NULL;
    vl=new double[r*c];
    for(int i=0;i<r*c;i++)
        vl[i]=0.0;
    //double vr[n][n]={0.0};//generates stack overflow
    double*vr=NULL;
    vr=new double[r*c];
    for(int i=0;i<r*c;i++)
        vr[i]=0.0;
    info=LAPACKE_dggev(LAPACK_ROW_MAJOR,'V','V',n,a,n,b,n,alphar,alphai,beta,vl,n,vr,n);
    std::cout<<"info returned by LAPACKE_dggev:\t"<<info<<'\n';
    double*eigValueReal=NULL;
    eigValueReal=new double[n];
    for(int i=0;i<n;i++)
        eigValueReal[i]=0.0;
    for(int i=0;i<n;i++)
        eigValueReal[i]=alphar[i]/beta[i];
    write1Darray("Output_Files/eigValueReal_LAPACKE_DGGEV.txt",eigValueReal,n);
    write1Darray("Output_Files/beta.txt",beta,n);
    writeFile_1Darray("Output_Files/eigVectorRight_LAPACKE_DGGEV.txt",vr,r,c);
    delete[] a;
    delete[] b;
    delete[] alphar;
    delete[] alphai;
    delete[] beta;
    delete[] vl;
    delete[] vr;
    delete[] eigValueReal;
    return info;
}
You are getting an assertion in the heap handling. You can check the mentioned line by viewing the source of dbgheap.c (found in VC<version>\crt\src in your Visual Studio directory).
Such assertion usually indicates a heap corruption.
I have never used igraph, but the error is obvious and confirmed by reading the igraph Reference Manual.
igraph_vector_init(&v, 0);
Quote:
This function constructs a vector of the given size and initializes each entry to 0.
VECTOR(v)[0]=v1
Quote:
The simplest way to access an element of a vector is to use the VECTOR macro. This macro can be used both for querying and setting igraph_vector_t elements.
…
Note that there are no range checks right now.
You are creating a vector of size zero and then trying to access its elements. This will of course fail: no memory has been allocated on the heap, so accessing elements probably goes through a NULL pointer, which is caught later by the heap checks.
You first have to check how many elements are in your input file and pass that number to igraph_vector_init() so it allocates the required memory.
Hi! I’ve recently come upon a design bug in VC++2012 runtime library.
The situation: Overrunning a CRT debug heap block by more than 4 bytes.
The problem: When more than 4 bytes are overrun, an ambiguous breakpoint is triggered instead of a detailed message
(buffer was overrun at the end of …).
Example code to duplicate this issue:
/*
 * By overrunning only the no-mans land after the buffer,
 * the Windows heap isn't actually corrupted, so it works correctly.
 */
char* mem1 = (char*)malloc(24);
memset(mem1 + 24, 0, 4); // overrun 4 bytes (size of no-mans land)
free(mem1); // correct assert reported: buffer overrun at the end of buffer

/*
 * By overrunning more than 4 bytes, we actually corrupt the
 * Windows heap and cause Win32 HeapValidate to fail:
 */
char* mem2 = (char*)malloc(24);
memset(mem2 + 24, 0, 5); // overrun 5 bytes (only 1 byte beyond no-mans land)
free(mem2); // ambiguous breakpoint is triggered with no info what happened
Since the VC++ team has been very helpful by actually including the source code of the C runtime library, it has been easy to find the source of the problem:
// dbgheap.c : line 1322
/*
 * If this ASSERT fails, a bad pointer has been passed in. It may be
 * totally bogus, or it may have been allocated from another heap.
 * The pointer MUST come from the 'local' heap.
 */
_ASSERTE(_CrtIsValidHeapPointer(pUserData));

/* get a pointer to memory block header */
pHead = pHdr(pUserData);

/* verify block type */
_ASSERTE(_BLOCK_TYPE_IS_VALID(pHead->nBlockUse));

/* if we didn't already check entire heap, at least check this object */
if (!(_crtDbgFlag & _CRTDBG_CHECK_ALWAYS_DF))
{
    /* check no-mans-land gaps */
}
The point that triggers the ambiguous breakpoint is _CrtIsValidHeapPointer, which basically just calls HeapValidate(_crtheap, 0, pHdr(pUserData)) to validate the current memory block. Of course this Win32 call will trigger an interrupt, since the next heap block has been corrupted by the overrun. As described by the MSDN documentation on HeapValidate, the interrupt is triggered if a debugger is attached.
I can also see the reasoning behind validating the pointer: to figure out whether the pointer actually belongs to the heap at all. If someone calls free on a totally bogus pointer, you need a way to catch that. Even so, it doesn’t actually display any usable information about what went wrong. And since the following line generates an assertion anyway, the program will crash anyway (!):
_ASSERTE(_CrtIsValidHeapPointer(pUserData));
The solution: I think it would greatly improve the CRT if that function didn’t focus first on the truly bogus edge cases that crash anyway. It should handle the most likely path (a valid heap pointer) first and report an actually useful error message. After those checks, it can run HeapValidate and crash if it likes:
/* get a pointer to memory block header */
pHead = pHdr(pUserData);

/* verify block type */
_ASSERTE(_BLOCK_TYPE_IS_VALID(pHead->nBlockUse));

/* if we didn't already check entire heap, at least check this object */
if (!(_crtDbgFlag & _CRTDBG_CHECK_ALWAYS_DF))
{
    /* check no-mans-land gaps */
}

/* catch any other corruption of the heap */
_ASSERTE(_CrtIsValidHeapPointer(pUserData));
This will make the debug heap report a buffer overrun at the end of the memory block and will then proceed to crash. The improvement is that this time an actual message is shown to the programmer.
I hope this bug reaches the VC++ development team and that in future releases the debug code can be improved.
Regards,
Jorma Rebane
Hi,
I encountered the ‘Debug Assertion Failed!’ when trying to build a project with Microsoft Visual C++ 6.0.
It happens when I use std::string and try to print the string out. It says:
Program : temp.exe
File: dbgheap.c
Line: 1011
Expression: _CrtIsValidHeapPointer(pUs
Some things you might want to know about my program before I ask my question: my program builds by running a main application and importing a couple of .lib files that I have created. During the creation of those library files, I got a "warning C4251" saying that the string class needs an external DLL interface.
Is this what’s wrong? But here is something else I found out:
My Run-Time Libraries are as follows:
In Win32 Debug
—Application (which is a Win32 Console Application) —> Debug Single Threaded
—Plugin1 (which is a Dynamic Link Library) —> Debug Multithreaded
—Plugin2 (which is a Dynamic Link Library) —> Debug Multithreaded
In Win32 Release
—Application (which is a Win32 Console Application) —> Single Threaded
—Plugin1 (which is a Dynamic Link Library) —> Multithreaded
—Plugin2 (which is a Dynamic Link Library) —> Multithreaded
From the research I’ve done, this assertion error occurs when two different C runtime libraries are being used. Is there a way to work around this? For example, is there a way to specify the correct libraries to be used? Or any other solutions? Or do I need to fix that C4251 warning?
Thanks.