Keywords
Bioinformatics, Data mining, Images, Scientific literature, Text, OCR, PDF, Biomedical
We present here a revised manuscript striving for more clarity and better presentation of the results, including language and style. The conclusion has been revised to include a discussion of some of the available image-based databases, which can directly profit from MSL through fast and automatic separation of text from the text describing the images.
There has been an enormous increase in the amount of scientific literature over the last decades1. The importance of information retrieval in the scientific community is well known; it plays a vital role in analyzing published data. Most published scientific literature is available in Portable Document Format (PDF), a very common format for exchanging printable documents. This makes it all-important to extract text and figures from PDF files in order to implement an efficient Natural Language Processing (NLP) based search application. Unfortunately, PDF is rich only in display and printing capabilities; extracting information from it requires explicit effort, which significantly impacts search and retrieval capabilities2. For this reason, several document-analysis tools have been developed for physical and logical document structure analysis of this file type.
The basic information retrieval (IR) system recently provided by PubMed is efficient in retrieving literature based on published text (titles, authors, abstracts, introductions etc.), with the application of automatic term mapping and Boolean operators3. The normal outcome of a successful NLP query is a maximum of 20 relevant results per page; however, the user can improve the search by customizing the query using the provided advanced options. One of the major technical challenges is the availability of structured text and figures. To the best of our knowledge, there is still no single tool available which can efficiently perform both physical and logical structure analysis of all kinds of PDF files and can extract and classify all kinds of information (including text embedded in all kinds of biological and scientific published figures). Different commercial and freely downloadable software applications provide support for extracting text and images from PDF files:
A-PDF (http://www.a-pdf.com/image-extractor/),
PDF Merge Split Extract (http://www.pdf-technologies.com/pdf-library-merge-split.aspx),
BePDF (http://haikuarchives.github.io/BePDF/), KPDF (https://kpdf.kde.org),
MuPDF (http://mupdf.com), Xpdf tool (http://www.foolabs.com/xpdf/),
Power PDF (http://www.nuance.com/for-business/imaging-solutions/document-conversion/power-pdf-converter/index.htm)
However, these software applications do not provide text and images in a form suitable for further logical analysis, e.g. mining text in reading order from double- or multi-column documents (the text of the first column followed by the text of the second column, and so on), searching marginal text using keywords, removing irrelevant graphics (e.g. journal and publisher logos and header/footer images embedded inside the document), and extracting text embedded inside single- and multi-panel complex biological images.
So far, the current PubMed system, as well as many other related orthodox NLP approaches, e.g. 4–13, have been unable to implement a complete and efficient information retrieval system capable of extracting both text and figures from published PDF files.
To meet the technological objectives of this challenge, we took a step forward by developing a new user-friendly, modular, client-based system, MSL (the software acronym denotes "Mining Scientific Literature"), for the extraction of full and marginal text from PDF files based on keywords and coordinates (Figure 1). It was built with a product-line architecture. In addition, MSL provides a module for the extraction of figures from PDF files and applies an Optical Character Recognizer (OCR) to extract text from all kinds of biomedical and biological images. MSL comprises three modules working in a product-line architecture: Text, Image and OCR (Figure 2). Each module performs its task independently, and its output is used as input for the next module. When a PDF file is input to MSL, the full and marginal text is automatically extracted first, then the images are automatically extracted and placed in the same directory as the PDF file. The user then selects from the extracted images shown in the image view and applies OCR to a particular image to extract its text.
There are three main components (lower left square): Text, Image and OCR. A PDF document35 is input and processed by MSL. (Upper left square): the Text module provides the extracted, searched and marginalized text in reading order, together with the file attributes. The Image component provides a preview of the images extracted from the document. The OCR component provides the text extracted from a selected and processed image. The output is shown in the upper right square, and the GUI and user options are indicated in the lower right square.
There are three main components, Text, Image and OCR, and nine sub-components (rectangles): Text File, Image File, Visualize Image, PDF File, LEADTOOLS, XML File, iTextSharp, Bytescout and Spire. The Text component applies iTextSharp, Bytescout and Spire to extract the text from the PDF document and writes the output to an XML file. The Image component applies Spire to extract images from the PDF document and visualizes them using Visualize Image. The OCR component applies LEADTOOLS to extract text from images and exports it to PDF format. Colored arrows denote the information processing flow. The hexagon at the top ….
MSL extracts text and figures from published scientific literature and helps in analyzing the text embedded inside figures. The overall methodological implementation and workflow of MSL is divided into two processes: (I) text mining and (II) image analysis. MSL is a desktop application, designed and developed following the scientific software engineering principles of the three-layered Butterfly14 software development model.
Physical and logical document analysis remains an open challenge. To the best of the authors' knowledge, there is no solution available which can perform efficient physical and logical structural analysis of PDF files, implement a completely correct rendering order and classify text into all possible categories, e.g. Title, Abstract, Headings, Figure Captions, Table Captions, Equations, References, Headers, Footers etc.
However, there are some tools available which help in this regard, e.g. PDF2HTML for contextual modeling of logical labelling15, PDF-Analyzer for object-level document analysis16, XED for hidden structure analysis2, Dolores for logical structure analysis and recovery17, automatic conversion from PDF to XML18 and PDF to HTML19 etc.
MSL has enhanced capabilities compared to these tools, including Dolores (see the comparison below). We therefore developed MSL's Text module, which is capable of processing PDF files with single, double or multiple columns. It divides the system's text-based output into four sub-modules: full text, marginal text, keyword-based extracted text and file attributes. Full text gives the complete text of the PDF file; marginal text allows the user to give coordinates (Lower Left X, Lower Left Y, Upper Right X and Upper Right Y) and extract the desired portion of text from the PDF file. Keyword-based text allows the user to extract information from the PDF file based on keywords and their respective coordinates (Left, Top, Width, Height); this kind of search is helpful if, for example, a user is only interested in the figure captions or references. The last sub-module, file attributes, gives information about the input file, including title, author, creator, producer, subject, creation date, keywords, modification date, number of pages and number of figures.
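The article does not include MSL's source code, so the following is only a minimal sketch, assuming iTextSharp 5.x (the library MSL uses for full and marginal text, see below), of how full-page text and a coordinate-bounded ("marginal") region could be extracted. The file name, page number and rectangle coordinates are illustrative only.

```csharp
// Minimal sketch (not MSL's actual code), assuming iTextSharp 5.x:
// full-page text plus text restricted to a user-given rectangle.
using System;
using iTextSharp.text;
using iTextSharp.text.pdf;
using iTextSharp.text.pdf.parser;

class MarginalTextDemo
{
    static void Main()
    {
        PdfReader reader = new PdfReader("paper.pdf");   // illustrative file name

        // Full text of page 1 in approximate reading order.
        string fullText = PdfTextExtractor.GetTextFromPage(
            reader, 1, new LocationTextExtractionStrategy());

        // "Marginal" text: only what falls inside the rectangle given as
        // (lower-left X, lower-left Y, upper-right X, upper-right Y) in PDF points.
        Rectangle region = new Rectangle(36f, 36f, 300f, 180f);
        RenderFilter filter = new RegionTextRenderFilter(region);
        ITextExtractionStrategy strategy =
            new FilteredTextRenderListener(new LocationTextExtractionStrategy(), filter);
        string marginalText = PdfTextExtractor.GetTextFromPage(reader, 1, strategy);

        Console.WriteLine(fullText);
        Console.WriteLine("---- region ----");
        Console.WriteLine(marginalText);

        reader.Close();
    }
}
```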
While implementing the Text module, we researched and tried different available commercial and freely downloadable libraries, focusing on full text extraction, marginal text extraction, keyword-based text extraction and extraction of text embedded in images in PDF files. We tried different implemented systems and libraries (Table 1), e.g. iTextSharp, Bytescout, Spire PDF, Sautinsoft PDF Focus, Dynamic PDF, PDFBox, iText PDF, QPDF, PoDoFo, Haru PDF Library, JPedal, SVG Imprint, Glance PDF Tool Kit, BCL and SharpPDF.
Library Name | Weblink
---|---
iTextSharp | http://sourceforge.net/projects/itextsharp/
Bytescout | https://bytescout.com
Spire PDF | http://www.e-iceblue.com/Introduce/pdf-for-net-introduce.html
Sautinsoft PDF Focus | http://www.sautinsoft.com/products/pdf-focus/
Dynamic | https://www.dynamicpdf.com
PDFBox | https://pdfbox.apache.org
iText PDF | http://itextpdf.com
QPDF | http://qpdf.sourceforge.net
PoDoFo | http://podofo.sourceforge.net
Haru PDF Library | http://libharu.sourceforge.net
JPedal | https://www.idrsolutions.com/jpedal/
SVG Imprint | http://svgimprint-windows.software.informer.com
Glance PDF Tool Kit | http://www.planetpdf.com/forumarchive/53545.asp
BCL | http://www.pdfonline.com/corporate/
SharpPDF | http://sharppdf.sourceforge.net
One of the common problems in almost all libraries is the merging and mixing of text in double- or multi-column documents. Our system combines different libraries, each useful for a different purpose. We use Spire PDF to remove bookmarks, iTextSharp for the extraction of full and marginal text, and Bytescout for the keyword-based marginalized text search and for producing output in the form of an XML file (Figure 2). The generated XML file contains structured (tagged) text along with information about its coordinates (placement in the file), font (bold, italic etc.) and size, which can be used for mapping and pattern recognition tasks. A sketch of how such positional information can be captured is given below.
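The exact schema of MSL's XML output is not given in the article, so the element and attribute names below are hypothetical. The sketch, again assuming iTextSharp 5.x, shows how text chunks can be captured together with their position, font name and an approximate size, and written to a simple XML file.

```csharp
// Hedged sketch (not MSL's actual exporter): capture text chunks with position,
// font name and approximate size via an iTextSharp 5.x render listener and
// write them to XML. The tags <document>/<page>/<chunk> are hypothetical.
using System.Globalization;
using System.Xml.Linq;
using iTextSharp.text.pdf;
using iTextSharp.text.pdf.parser;

class XmlChunkListener : IRenderListener
{
    public XElement Root { get; } = new XElement("page");

    public void BeginTextBlock() { }
    public void EndTextBlock() { }
    public void RenderImage(ImageRenderInfo renderInfo) { }

    public void RenderText(TextRenderInfo renderInfo)
    {
        Vector start = renderInfo.GetBaseline().GetStartPoint();
        // Approximate glyph height (ascent minus descent) as a stand-in for font size.
        float size = renderInfo.GetAscentLine().GetStartPoint()[Vector.I2]
                   - renderInfo.GetDescentLine().GetStartPoint()[Vector.I2];
        Root.Add(new XElement("chunk",
            new XAttribute("x", start[Vector.I1].ToString(CultureInfo.InvariantCulture)),
            new XAttribute("y", start[Vector.I2].ToString(CultureInfo.InvariantCulture)),
            new XAttribute("font", renderInfo.GetFont().PostscriptFontName),
            new XAttribute("size", size.ToString(CultureInfo.InvariantCulture)),
            renderInfo.GetText()));
    }
}

class XmlExportDemo
{
    static void Main()
    {
        PdfReader reader = new PdfReader("paper.pdf");     // illustrative file name
        XElement document = new XElement("document");
        var parser = new PdfReaderContentParser(reader);
        for (int page = 1; page <= reader.NumberOfPages; page++)
        {
            var listener = new XmlChunkListener();
            parser.ProcessContent(page, listener);
            listener.Root.SetAttributeValue("number", page);
            document.Add(listener.Root);
        }
        reader.Close();
        document.Save("paper.xml");
    }
}
```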
Image-based analysis is a versatile and inherently multiplexed approach, as it can quantitatively measure biological images to detect features that are not easily detectable by the human eye. Millions of figures have been published in the scientific literature, containing information about results obtained from different biological and medicinal experiments. Several data and image mining solutions have been implemented, published and used over the last 15 years20. Some of the mainstream approaches address the analysis of all kinds of images (flow charts, experimental images, models, geometrical shapes, graphs, images of things or objects, mixed content etc.). Fewer approaches have been proposed for specific kinds of image analysis, e.g. the identification and quantification of cell phenotypes21, prediction of the subcellular localization of proteins in various organisms22, analysis of gel diagrams23, and mining and integration of pathway diagrams24.
While implementing a new data-mining tool, one of our goals was to extract images from published scientific literature and to extract the embedded text as well. We analyzed different freely available and commercial OCR systems and libraries, including Aspose, PUMA, Microsoft OCR, Tesseract, LEADTOOLS, Nicomsoft OCR, MeOCR, OmniPage, ABBYY and Bytescout, all claiming to be able to extract embedded text from figures. During our research we found LEADTOOLS (Figure 2) to be one of the best available solutions for this purpose. MSL is capable of automatically extracting images from PDF files and allows the user to apply OCR to any extracted image after clicking and enlarging it for a better view (using the Windows default image viewer).
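MSL's OCR code is not published with the article, and LEADTOOLS is a commercial SDK whose API differs between releases, so the following should be read only as an approximate sketch of the pattern shown in older LEADTOOLS .NET tutorials (Leadtools.Forms.Ocr): create an engine, add an image page, recognize it and save the result as PDF. The engine type, Startup arguments and file names are assumptions; a valid license and runtime directory are required in practice.

```csharp
// Approximate sketch only: OCR an extracted figure with the LEADTOOLS .NET OCR engine
// and export the recognized text as PDF. Namespaces/signatures follow older LEADTOOLS
// tutorials and may differ in other SDK releases; check against the installed SDK.
using Leadtools.Forms.Ocr;
using Leadtools.Forms.DocumentWriters;

class OcrDemo
{
    static void Main()
    {
        // "Advantage" is the engine type used in LEADTOOLS samples; a licensed runtime
        // directory is normally passed to Startup instead of the nulls shown here.
        IOcrEngine engine = OcrEngineManager.CreateEngine(OcrEngineType.Advantage, false);
        engine.Startup(null, null, null, null);
        try
        {
            using (IOcrDocument document = engine.DocumentManager.CreateDocument())
            {
                document.Pages.AddPage("extracted_figure.png", null); // illustrative file
                document.Pages.Recognize(null);
                document.Save("extracted_figure_text.pdf", DocumentFormat.Pdf, null);
            }
        }
        finally
        {
            engine.Shutdown();
        }
    }
}
```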
We tested MSL with the same parameters on randomly selected scientific manuscripts (ten PDF files) from different open access (F1000Research, Frontiers, PLOS, Hindawi, PeerJ, BMC) and restricted access (Oxford University Press, Springer, Emerald, Bentham Science, ACM) publishers, including some of the authors' published papers; details are given in Table 2. While testing MSL on the selected manuscripts, we observed the best overall performance for the manuscripts25,26–30, with satisfactory results from almost all publishers (including Oxford University Press, BMC, Frontiers, PeerJ, Bentham Science and ACM) in terms of both extracting text in reading order and extracting images. Poor performance was observed for manuscripts from the PLOS31, Hindawi32, F1000Research33 and IEEE34 publishers. For text extraction, the text was in reading order for the manuscripts from F1000Research and IEEE, but the text was without spaces in the manuscript from PLOS and contained additional lines and extra spaces in the manuscript from Hindawi. For figure extraction we observed one problem common to the four manuscripts from these publishers: along with the manuscript figures, embedded journal or publisher logos and images were also extracted. Additionally, while analyzing the manuscript from F1000Research, we observed that the images were broken into many pieces and it was not possible to obtain a single complete image. As we did not test all manuscripts from the mentioned publishers, we cannot claim that the results will be the same for all papers from a publisher, as the output may vary between papers. The results observed using MSL are given in the attached supplementary material (Supplementary Table S1 and Dataset 1).
Publishers | Manuscript |
---|---|
F1000Research | Ant-App-DB: a smart solution for monitoring arthropods activities, experimental data management and solar calculations without GPS in behavioral field studies33. |
PLOS | The Genomic Aftermath of Hybridization in the Opportunistic Pathogen Candida metapsilosis31. |
Hindawi | Mathematical Properties of the Hyperbolicity of Circulant Networks32. |
IEEE | Design implementation of I-SOAS IPM for advanced product data management34. |
BMC | Software LS-MIDA for efficient mass isotopomer distribution analysis in metabolic modeling26. |
PeerJ | Anvi’o: an advanced analysis and visualization platform for ‘omics data27. |
Frontiers | Ontology-based approach for in vivo human connectomics: the medial Brodmann area 6 case study28. |
ACM | Intelligent semantic oriented agent based search (I-SOAS)29. |
Bentham Science | DroLIGHT-2: Real Time Embedded and Data Management System for Synchronizing Circadian Clock to the Light-Dark Cycles30. |
Oxford University Press | Bioimaging-based detection of mislocalized proteins in human cancers by semi- supervised learning25. |
To apply MSL, published scientific literature first has to be downloaded in the form of a PDF file from any published source. The validation process using MSL consists of three major steps: 1) text mining, 2) image extraction, and 3) application of OCR to extract text from selected images, as shown in Figure 1, following the implemented workflow shown in Figure 2. Example results and graphics are shown in Figure 1, Figure 3 and Figure 4. The representation includes the extraction of text and images from one of the randomly selected papers35, and the application of OCR to one of the extracted images from another randomly picked publication25.
A figure (shown as three panels, including two charts, one image and a table) is analyzed (example from ref. 25). OCR (LEADTOOLS) is applied to extract and report the text from the figure in two ways (red stippled lines): as plain text (section: Extracted Text from Figure) and as a PDF file (rectangle) with margins similar to the original figure (section: Exported text in PDF format). The steps involved are document image analysis, text extraction and PDF conversion.
The scanned, image-based page of one of the randomly selected papers35 is processed using OCR (LEADTOOLS; blue arrows): text is extracted from the image and a new PDF is generated (rectangles), in which the text is placed with margins similar to those of the image file. The steps involved are again document image analysis, text extraction and PDF conversion.
Figure 1 shows that the PDF file of a randomly selected published article35 is input to MSL's Text module; the extracted text is divided into three categories: (i) complete text in correct rendering order, (ii) marginalized text and (iii) keyword-based searched text. Two figures (Figure 1 and Figure 2) are extracted and displayed in the image section, and one of them is selected for OCR. The applied OCR extracts the textual information, which is displayed and can be exported to a PDF file.
To further validate the application of OCR and discuss different results, Figure 3 shows another example of embedded text extraction from a complex figure36, which includes three panels: (i) colorful pie and circle charts, (ii) biological images and (iii) tabular information. As in our previous application of OCR, the results are displayed in textual form as well as in a generated PDF file of the extracted text. A noticeable difference between the two outputs is that the textual information is presented in line-by-line order, whereas in the PDF file the information is placed with margins matching the original image.
The last example is based on validating MSL by extracting textual information from image-based PDF files. We produced an image version of one of the randomly selected articles25 and then processed one of its pages. As Figure 4 shows, the results obtained were comprehensive in both the textual and the PDF form. This kind of text extraction can be very helpful, especially when the literature is available only as images, e.g. old literature published in print only but electronically available in scanned form. MSL produces several files as system output in the parent folder of the input files: XML files (which contain structured or tagged information), image files (extracted from the PDF file) and PDF files for all images analyzed using OCR.
We mentioned earlier that we tried and implemented different libraries for text and image extraction and analysis. The best text-based outcome was observed using iTextSharp, the best image extraction was observed using Spire, and the OCR from LEADTOOLS was the most promising. While validating the implemented solution, beyond the expected results (text and images), we observed some limitations in the libraries used: irrelevant images are also automatically extracted, e.g. journal logos, publisher logos and header/footer images embedded inside the document (e.g. images added by the publishers to provide publishing details). However, these images are easily recognized by the user and can also be removed automatically if desired, e.g. when they always refer to the same logo.
Furthermore, the text was not always in good rendering order, especially when there were text-based mathematical equations with super- and subscripts, and for double- or multi-column PDF files the rendering order of most libraries is not correct. While extracting text, we found that some important symbols were missed and extra spaces were generated in some paragraphs. We also found that it was not possible to extract particular images that are created as a combination of different sub-images and text objects in the manuscript. In these cases, the text appears in the extracted text area and all extracted sub-images appear in the image section, with the possibility of some sub-images being missed as well. Moreover, when we applied OCR to different images (extracted or loaded), we found that its performance varies with the complexity of the input images. In the case of special characters (e.g. Greek delta, alpha, beta), it does not perform well unless these are hard-wired in the software.
In comparison to the tools mentioned earlier, MSL has some advantages as well as limitations. For instance, Dolores helps the user add custom tags to the PDF document and create a semantic model associated with the processed class of documents, PDF2HTML implements a conditional random fields (CRF) based model to learn semantics from the content of a processed PDF page, PDF-Analyzer devised a model based on rectangular objects for the analysis of PDF documents, and XED applies a method combining PDF symbol analysis with traditional document image processing techniques. MSL does not apply any of these methods or support such features. However, beyond the capabilities of the above tools, MSL supports marginalization of text, provides text in correct reading order, enables keyword-based search and extracts embedded text from figures (using OCR), which none of these tools does. To enhance the functionality of the MSL program (e.g. our standard version available here for download), we give a table of the special symbols used most often in the biomedical literature (Table 3). Depending on the application in mind, the MSL parser can simply be extended to also consider the special characters occurring often in your texts; a sketch of such an extension is given after Table 3.
Number | Special Symbols | Name |
---|---|---|
1 | Δ | Delta |
2 | α | Alpha |
3 | β | Beta |
4 | ϕ | Phi |
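MSL's parser code is not part of the article, so the following is only a hypothetical illustration of how the Table 3 symbols could be "hard-wired" into a parser: a small lookup table of known code points that the text processing keeps (or tags by name) instead of dropping them as unknown glyphs. The class and method names are invented for this example.

```csharp
// Hypothetical sketch of extending a text parser with the Table 3 symbols: characters
// in this set are kept (or tagged by name) instead of being dropped as unknown glyphs.
using System.Collections.Generic;

static class SpecialSymbols
{
    // Code points and names from Table 3; extend as needed for your corpus.
    public static readonly IReadOnlyDictionary<char, string> Known =
        new Dictionary<char, string>
        {
            { '\u0394', "Delta" },  // Δ
            { '\u03B1', "Alpha" },  // α
            { '\u03B2', "Beta"  },  // β
            { '\u03D5', "Phi"   },  // ϕ (phi symbol as printed in Table 3; U+03C6 for φ)
        };

    public static bool IsKnownSymbol(char c) => Known.ContainsKey(c);
}
```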
The MSL architecture is based on Product Line Architecture (PLA) and Multi-Document Interface (MDI) development principles, and it is designed and developed (using the C-Sharp programming language and the Microsoft .NET Framework) following the key principles of the Butterfly paradigm14,36. The workflow of MSL is divided into two processes: (I) extraction and marginalization of text with respect to the division and placement of text in the PDF file, and keyword-based search, using the iTextSharp, Bytescout and Spire PDF libraries, and (II) extraction and analysis of figures using the Spire PDF library and LEADTOOLS OCR.
MSL takes Portable Document Format (PDF) literature files as input, performs partial physical structure analysis, and exports output in different formats, e.g. text, images and XML files. It allows the user to extract text based on keywords and margins (X and Y coordinates), obtain the PDF file's metadata (title, author, creator, producer, subject, creation date, keywords, modification date, number of pages and number of figures) and save the extracted full and marginal text in text files.
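As a small illustration (not MSL's actual code), the standard PDF metadata fields listed above can be read with iTextSharp 5.x as shown below; the number of figures is not part of the PDF Info dictionary and has to be counted separately from the extracted images. The file name is illustrative.

```csharp
// Sketch: read standard PDF metadata with iTextSharp 5.x. The Info dictionary uses
// the standard PDF keys (Title, Author, Creator, Producer, Subject, Keywords,
// CreationDate, ModDate); the figure count must be derived from image extraction.
using System;
using iTextSharp.text.pdf;

class MetadataDemo
{
    static void Main()
    {
        PdfReader reader = new PdfReader("paper.pdf");   // illustrative file name
        foreach (var entry in reader.Info)               // IDictionary<string, string>
            Console.WriteLine($"{entry.Key}: {entry.Value}");
        Console.WriteLine($"Pages: {reader.NumberOfPages}");
        reader.Close();
    }
}
```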
Biomedical image extraction and analysis is one of the most complex tasks in computer science and image analysis. Some of the mainstream approaches37–42 have been proposed for the analysis of all kinds of images (e.g. flow charts, experimental images, models, geometrical shapes, graphs, images of things, mixed content etc.). MSL allows the user to automatically extract images from PDF files, view any selected image via the Windows default image viewer and apply the implemented OCR. Besides extracting images from PDF files, MSL allows the user to load any image, apply OCR and export the output as a readable PDF file. A sketch of the image extraction step is given below.
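As above, this is a hedged sketch rather than MSL's code: it uses the Spire.PDF ExtractImages() call (available in Spire.PDF releases of that period; newer releases expose a different image API) to dump every embedded image of a document to disk. File and output names are illustrative.

```csharp
// Hedged sketch (not MSL's code): extract embedded images with the Spire.PDF library,
// which MSL uses for its image module, and save them as PNG files.
using System.Drawing;
using System.Drawing.Imaging;
using Spire.Pdf;

class ImageExtractionDemo
{
    static void Main()
    {
        PdfDocument doc = new PdfDocument();
        doc.LoadFromFile("paper.pdf");                   // illustrative file name

        int index = 0;
        foreach (PdfPageBase page in doc.Pages)
        {
            Image[] images = page.ExtractImages();
            if (images == null) continue;
            foreach (Image image in images)
                image.Save($"figure_{index++}.png", ImageFormat.Png);
        }
        doc.Close();
    }
}
```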
MSL produces several output files in the parent folder, including XML files (which contain structured or tagged information), image files (extracted from the PDF file) and PDF files for all images analyzed using OCR (Figure 5).
This figure shows the different files generated during analysis of a PDF document. The PDF file (top left) is the actual document, the XML file (top middle) is the structured (tagged) form of the extracted text, a second PDF file (top right) contains the text extracted from an image (see Figure 3), and all other files are images extracted from the original PDF document.
The MSL application is very simple to install and use. It was tested and can be configured on the Microsoft Windows platform (preferred OS version: 7). MSL follows a simple six-step installation process (Figure 6). After installation, it can be run either by clicking the installed application's icon on the desktop or by following the sequence: Start → All Programs → MSL 1.0.0 → MSL.
Squares indicate steps, the blue stippled line the process.
One important point to remember when using the MSL application is that it is based on different PDF text extraction, marginalization and figure extraction libraries, which are automatically configured during installation. However, the OCR provided by LEADTOOLS is not a freely available library; we have used it under an academic research (free) license. The OCR library is also automatically configured during installation, but its performance on different (non-licensed) machines is not confirmed. Moreover, the recommended display resolution is 1680×1050 with landscape orientation.
The development of a virtual research environment to store and link molecular data can be well achieved and established if the mixture of text, protocols and omics data is first properly separated from images, figures and figure legends, a task for which our tool is well suited. There are a number of databases (e.g. Alzheimer's Disease Neuroimaging Initiative (ADNI); Breast Cancer Digital Repository (BCDR); BiMed; Public Image Databases; Cancer Image Database (caIMAGE); COllaborative Informatics and Neuroimaging Suite (COINS); DrumPID; Digital Database for Screening Mammography (DDSM); Electron Microscopy Data Bank (EMDB); LONI image data archive; Mammography Image Databases (MID); New Database Provides Millions of Biomedical Images; Open Access Series of Imaging Studies (OASIS); Stanford Tissue Microarray Database (TMA); STRING; The Cancer Imaging Archive (TCIA); Whitney Imaging Center etc.) which can directly profit from MSL through fast and automatic separation of text and textual descriptions from images and figure legends; such separation of the text describing the images is important for further improvement of a database and its content. One in-house example is the DrumPID database43, in which we warehouse different types of data and images, and where an improved separation and retrieval of text versus figure legends, image descriptions etc. is highly useful and currently applied.
The latest available and easy-to-use version of MSL has been tested and validated in-house. The advancements in information retrieval techniques for text and figure analysis, combined with this computational tool, can support a wide range of studies.
F1000Research: Dataset 1. Extracted Images and Text from Papers tested using MSL, 10.5256/f1000research.7329.d10873944
The software executable is freely available at the following web link: https://zenodo.org/record/30941#.Vi0PtmC5LHM
The software download section provides one executable: MSL, setup to be installed on the Microsoft Windows platform.
MSL has not been developed for any commercial purpose; it is a non-commercial prototype application for academic research, analysis and development purposes.
Mining Scientific Literature (MSL) Ver 1.0.0 (DOI: 10.5281/zenodo.30941).
ZA developed the complete solution (including research, software design, programming, testing, deployment and technical documentation). TD guided the study. All authors participated in writing the manuscript and approved the final manuscript for publication.
This work was supported by a German Research Foundation grant (DFG-TR34/Z1) to TD.
I confirm that the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
We thank the German Research Foundation (DFG-TR34/Z1) for support. We would like to thank Dr. Chunguang Liang (University of Wuerzburg, Germany) for his help in testing MSL and all interested colleagues for critical community input on the approach and anonymous reviewers for their helpful comments.
We would like to thank all the open source, licensed and commercial library providers, for their help in this non-commercial and academic research and software development.
Supplementary Table S1. List of Papers (PDF files) tested using MSL.
This supplementary table lists some of the manuscripts from different publishers (F1000Research, PLOS, Hindawi, IEEE, BMC, PeerJ, Frontiers, ACM, Bentham Science and Oxford University Press) that were used for testing and validating the MSL application. The table provides information about some of the extracted images and the observed full and marginal text.