
Volume 8, Issue 7, July – 2023 International Journal of Innovative Science and Research Technology

ISSN No:-2456-2165

A Survey on Sentence Level Clustering Techniques


1 M. Divya, 2 Dr. S. Sukumaran
1 Ph.D Research Scholar, 2 Associate Professor
1,2 Erode Arts and Science College, Erode, Tamilnadu, India

Abstract:- Data mining is defined as extracting information from a huge collection of data. Clustering is the process of grouping or aggregating data items. Sentence clustering is mainly used in a variety of applications, such as classification and categorization of documents, automatic summary generation, and organizing documents. Clustering techniques are essential in the data mining process for revealing natural structures and identifying interesting patterns in the underlying data. This is important in domains such as sentence clustering, since a sentence is likely to be related to more than one theme or topic present within a document or set of documents. Sentence clustering is very important in several applications of text mining, for example single- and multi-document summarization, in which a sentence is selected based on its information contribution through sentence scoring. A document's sentences are either fully semantically related or share some degree of overlap with other sentences.

Keywords:- Sentence Level, Clustering, Text Mining.

I. INTRODUCTION

Data mining is the practice of automatically searching large stores of data to discover patterns and trends that go beyond simple analysis. Data mining is also called knowledge discovery in data. It is the extraction of hidden predictive information from large databases, and is a powerful technology with great potential to help organizations focus on the most important information in their data warehouses. Data mining is accomplished by building models. A model performs certain operations on data based on some algorithm. The notion of automatic discovery refers to the execution of data mining models. Data mining techniques can be divided into supervised and unsupervised; clustering is one of the unsupervised techniques.

 Clustering
Clustering is the process of grouping a set of objects so that objects in the same group are more similar to each other than to those in other clusters. [1] Clustering is the process concerned with the aggregation of data items. Sentence clustering is mainly used in a variety of applications, such as classification and categorization of documents, automatic summary generation, organizing documents, and so on. In text processing, sentence clustering plays an essential part and is used in text mining activities. The size of the clusters may change from one cluster to another. [5] Clustering groups similar data objects together, helps to discover hidden similarities and key concepts, and summarizes a large amount of text into groups. Most documents contain interrelated topics or terms, and a great many are interrelated to some degree. [6]

 Sentence Clustering
Sentence clustering plays an important role in many text processing activities. For example, various authors have argued that incorporating sentence clustering into extractive multi-document summarization avoids problems of content overlap, leading to better coverage. However, sentence clustering can also be used within more general text mining tasks. [2]

By clustering the sentences of those documents, we would intuitively expect at least one of the clusters to be closely related to the concepts described by the query terms; however, other clusters may contain information relating to the query that was previously unknown to us, and in such a case we would have effectively mined new information. [7]

II. RELATED WORKS

Lovedeep Singh et al [1]: Clustering text has been a significant problem in the area of Natural Language Processing. While there are techniques to cluster text by applying traditional clustering methods on top of contextual or non-contextual vector space representations, it remains an active area of research open to various improvements in the performance and implementation of these techniques. Attention mechanisms have proven to be highly effective in various NLP tasks in recent years.

Alvin Subakti et al [2]: Text clustering is the task of grouping a set of texts so that texts in the same group are more similar to each other than to those in a different group. Grouping texts manually requires a great deal of time and effort. The Bidirectional Encoder Representations from Transformers (BERT) model can produce a text representation that incorporates the position and context of a word in a sentence. This work evaluated and analyzed the performance of the BERT model as a data representation for text. To compare the performance of BERT, four clustering algorithms are used: k-means clustering, eigenspace-based fuzzy c-means, deep embedded clustering, and improved deep embedded clustering.

IJISRT23JUL1558 www.ijisrt.com 2178
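To make the sentence-clustering pipeline described above concrete, the following self-contained Python sketch turns sentences into bag-of-words count vectors and groups them with plain k-means. The toy sentences, the seeded initialisation, and the use of Euclidean distance are illustrative assumptions for the demo, not details taken from any of the surveyed papers.

```python
import math
from collections import Counter

def vectorize(sentences):
    """Map each sentence to a bag-of-words count vector over a shared vocabulary."""
    tokenized = [s.lower().split() for s in sentences]
    vocab = sorted({w for toks in tokenized for w in toks})
    return [[Counter(toks)[w] for w in vocab] for toks in tokenized]

def kmeans(vectors, seeds, iters=20):
    """Plain k-means; `seeds` are the indices of the initial centroids."""
    centroids = [list(vectors[i]) for i in seeds]
    labels = [0] * len(vectors)
    for _ in range(iters):
        # Assignment step: each vector joins its nearest centroid.
        for i, v in enumerate(vectors):
            labels[i] = min(range(len(centroids)),
                            key=lambda c: math.dist(v, centroids[c]))
        # Update step: each centroid becomes the mean of its members.
        for c in range(len(centroids)):
            members = [v for i, v in enumerate(vectors) if labels[i] == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels

sentences = [
    "the cat sat on the mat",
    "a cat chased the mouse",
    "stock prices rose sharply today",
    "markets and stock prices fell",
]
print(kmeans(vectorize(sentences), seeds=[0, 2]))  # [0, 0, 1, 1]
```

A real system would add stop-word removal, TF-IDF weighting, or embedding-based representations, as several of the surveyed works do.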


Renchu Guan, Hao Zhang et al [3]: Text clustering is a critical step in text data analysis and has been extensively studied by the text mining community. Most existing text clustering algorithms rely on the bag-of-words model, which faces high-dimensionality and sparsity issues and ignores the structural and sequential information of text. Deep learning-based models, such as convolutional neural networks and recurrent neural networks, treat texts as sequences but require supervised signals and lack explainable outcomes. A deep feature-based text clustering (DFTC) framework incorporates pretrained text encoders into text clustering tasks. This model, which relies on sequence representations, breaks the dependence on supervision.

G. Nivetha and K.S. Gunavathy et al [4]: Sentence clustering is very important in several applications of text mining, for example single- and multi-document summarization, in which a sentence is selected based on its information contribution through sentence scoring. A document's sentences are either fully semantically related or share some degree of overlap with other sentences. This work presents a clustering algorithm that uses sentence overlap (relatedness) in terms of fuzzy relational measurements. A sentence membership value is estimated from a parameter probability distribution using the Expectation-Maximization method. A sentence has a membership value for each cluster; the value reflects the degree of relatedness between the sentence and the cluster. Cluster centroids are updated with the membership values, and the cluster boundary is fuzzy (a sentence can belong to more than one cluster), so each centroid is updated in proportion to the membership values simultaneously. The clustering quality does not depend on the initial cluster centroids, the cluster output is consistent even across executions, and the cluster quality is improved.

Sarika S. Musale and Jyoti Deshmukh et al [5]: Clustering is the process of grouping or aggregating data items. Sentence clustering is mainly used in a variety of applications, such as classification and categorization of documents, automatic summary generation, organizing documents, and so on. In text processing, sentence clustering plays an essential part and is used in text mining activities. The size of the clusters may change from one cluster to another. Traditional clustering algorithms have several problems in clustering the input dataset, such as instability of clusters, complexity, and sensitivity. To overcome the drawbacks of these clustering algorithms, this paper proposes an algorithm called the Fuzzy Relational Eigenvector Centrality-based Clustering Algorithm (FRECCA) for the clustering of sentences. In this algorithm, a single object may belong to more than one cluster.

Mujawar Nilofar Shabbir and Prof. Amrit Priyadarshi et al [6]: Text processing is essential for organizing data and for extracting needed information from a stack of available Big Data. Sentence clustering is one of the processes used in text mining tasks. A text document may contain a hierarchical structure that relates to more than one theme at the same time, so a hierarchical fuzzy clustering algorithm can be used for clustering such text data. The paper presents a novel Hierarchical Fuzzy Relational Eigenvector Centrality-based Clustering (HFRECC) algorithm, which is an extension of the FRECCA algorithm. It tackles problems such as complexity, sensitivity, and mutability of clusters, is useful for natural language documents, operates in an Expectation-Maximization framework, and is able to identify overlapping clusters. The algorithm uses a graph representation of the data and operates on relational data, i.e., data given as pairwise similarities between data objects.

Binyu Wang, Wenfen Liu et al [7]: Text clustering is an important method for effectively organizing, summarizing, and navigating text information. However, in the absence of labels, the text data to be clustered cannot be used to train a deep learning-based text representation model. To resolve this issue, a text clustering algorithm based on deep representation learning is presented, using transfer learning domain adaptation and parameter updates during the clustering cycle. This method acts as an initialization of the model parameters. Finally, the text feature vectors obtained by the model are clustered with the MCSKM++ algorithm. The algorithm not only solves the model pre-training problem in unsupervised clustering, but also mitigates the transfer problem caused by differing numbers of domain labels.

Muhammad Mateen, Junhao Wen, Sun Song et al [8]: Clustering is used in various fields of research, including data mining, taxonomy, document retrieval, image segmentation, and pattern classification. Text clustering is a technique through which texts/documents are divided into a particular number of groups, so that the text within each group is related in content. In the field of information retrieval, text clustering is an important area of research for organizing and making sense of unstructured textual data. In this study, the ensemble clustering technique is investigated. The ensemble clustering is based on k-means, agglomerative, fuzzy c-means, k-medoid, and Gustafson-Kessel clustering, and obtains different clustering results independently on a particular dataset; it was observed that all results differed from one another. These processes are used for the quality and performance of clustering algorithms, and these stages are necessary to complete the clustering algorithm.

IJISRT23JUL1558 www.ijisrt.com 2179
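The fuzzy behaviour described in [4], [5] and [6], where every sentence holds a membership value in every cluster and centroids are updated in proportion to those memberships, can be sketched with a generic fuzzy c-means loop. This is a textbook fuzzy c-means on toy 2-D points standing in for sentence features; it is not the FRECCA or HFRECC algorithm itself, and the points, seed indices and fuzzifier m are illustrative assumptions.

```python
def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def fuzzy_c_means(points, seeds, m=2.0, iters=50):
    """Fuzzy c-means: every point holds a membership in every cluster, and each
    centroid is the mean of all points weighted by membership**m."""
    centroids = [list(points[i]) for i in seeds]
    c = len(centroids)
    u = []
    for _ in range(iters):
        # Membership update: u[i][k] = 1 / sum_j (d_ik / d_ij)^(2/(m-1)).
        u = []
        for p in points:
            d = [max(dist(p, ctr), 1e-12) for ctr in centroids]
            u.append([1.0 / sum((d[k] / d[j]) ** (2.0 / (m - 1.0)) for j in range(c))
                      for k in range(c)])
        # Centroid update, proportional to the membership values.
        for k in range(c):
            w = [u[i][k] ** m for i in range(len(points))]
            centroids[k] = [sum(wi * p[dim] for wi, p in zip(w, points)) / sum(w)
                            for dim in range(len(points[0]))]
    return u

# Two loose groups of 2-D points; (5, 5) sits between them.
points = [(0, 0), (1, 0), (0, 1), (9, 9), (10, 9), (9, 10), (5, 5)]
u = fuzzy_c_means(points, seeds=(0, 3))
# The in-between point receives roughly equal membership in both clusters,
# while points inside a group are dominated by one membership value.
print([round(x, 2) for x in u[-1]])
```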


Majid Hameed Ahmed and Sabrina Tiun et al [9]: The number of online documents has grown rapidly, and with the expansion of the Internet, document analysis, or text analysis, has become a fundamental task for preparing, storing, visualizing, and mining documents. Short text clustering (STC) has become a basic task for automatically grouping various unlabeled texts into meaningful clusters. STC is an essential step in many applications, including Twitter personalization, sentiment analysis, spam filtering, customer reviews, and many other social-network-related applications. The natural language processing research community has focused on STC and attempted to overcome the problems of sparseness, dimensionality, and lack of information. The work thoroughly reviews the various STC approaches proposed in the literature; providing insights into the technical aspects should help researchers identify the opportunities and challenges facing STC. To gain such insights, various literature, journals, and academic papers focusing on STC methods are surveyed.

Deepika U. Shevatkar and V.K. Bhusari et al [10]: Because most sentence similarity measures do not represent sentences in a common metric space, conventional fuzzy clustering approaches based on prototypes or mixtures of Gaussians are generally not applicable to sentence clustering. This work presents a novel fuzzy clustering algorithm that operates on relational input data, i.e., data in the form of a square matrix of pairwise similarities between data objects. The algorithm uses a graph representation of the data and operates in an Expectation-Maximization framework in which the graph centrality of an object in the graph is interpreted as a likelihood. Results of applying the algorithm to sentence clustering tasks show that it is capable of identifying overlapping clusters of semantically related sentences, and that it is therefore of potential use in a variety of text mining tasks.

Sergios Gerakidis et al [11]: The K-Means algorithm and the Hierarchical Agglomerative Clustering (HAC) algorithm are two of the best known and most commonly used clustering algorithms, the former because of its low time cost and the latter because of its accuracy. However, even the use of K-Means in text clustering over large-scale collections can lead to unacceptable time costs. This work addresses some of the most important approaches for document clustering over such 'big data' collections.

Chaman Lal, Awais Ahmed et al [12]: This work extracted various features, including stop-words, stemming, corpus tokenization, noise removal, and TF-IDF features from the corpus, and the clustering was conducted using the K-Means algorithm. The results show that a clustering methodology paired with a K-means clustering algorithm with TF-IDF features has already been used. Text clustering is a process that concerns the use of natural language processing (NLP) and clustering algorithms. This method of identifying clusters in unstructured texts can be used in various applications, including feedback analysis, study segmentation, and so on. Many factors influence K-means outcomes, including the distance measure, the initial positions of the centroids, and the grouping analysis.

Rafael Gallardo García, Beatriz Beltrán et al [13]: This work investigates the performance and accuracy of several clustering algorithms in text clustering tasks. The text preprocessing was realized using Term Frequency - Inverse Document Frequency to obtain weights for each word in every text, and then weights for every text. Cosine similarity was used as the similarity measure between the texts. The clustering tasks were realized over the PAN dataset, and three different algorithms were used: Affinity Propagation, K-Means, and Spectral Clustering.

Sumit Mayani, Saket Swarndeep et al [14]: This study focused on text documents that contain similar words. A combination of two algorithmic techniques, improved k-means and the conventional k-means algorithm, is used to improve the quality of the initial cluster centers. The proposed framework helps improve text document clustering with the MiniBatchKMeans algorithm by increasing its accuracy and by reducing anomalous data; it works with a combined stacking approach, which is quite advantageous for improving the accuracy. Euclidean distance is used as the dissimilarity measure, computing the distance between each pair of items.

Qing Yin, Zhihua Wang et al [15]: Deep Embedding Clustering (DEC)-based short text clustering models are being developed. In these works, latent representation learning and text clustering are performed simultaneously, which can produce meaningless representations. A novel DEC model, named the deep embedded clustering model with cluster-level representation learning (DECCRL), is proposed to jointly learn cluster- and instance-level representations. The proposed model is expected to be generalizable to various text clustering challenges, not just limited to short texts.

Vivek Mehta et al [16]: Text clustering is a major data mining technique used for classification, topic extraction, and information retrieval. Text-based datasets, especially those containing a large number of documents, are sparse and have high dimensionality. A clustering technique especially suitable for large text datasets is proposed that overcomes these limitations. The proposed technique is based on word embeddings derived from a recent deep learning model named "Bidirectional Encoder Representations from Transformers", and the method is named WEClustering. It deals with the problem of high dimensionality in an effective way; hence, more accurate clusters are formed.

Supakpong Jinarat, Bundit Manaskasemsak et al [17]: A new clustering technique, called word semantic graph clustering, is based on the use of text concepts. The word embedding model from Word2Vec is applied to capture the semantic meaning of words and then construct semantic subgraphs in which those words, represented as vertices, are connected by sufficiently high semantic similarities. Finally, short text documents are assigned to the same cluster if they contain at least one word belonging to the same semantic subgraph.

Shaohan Huang, Furu Wei et al [18]: Fine-tuning pre-trained language models (e.g., BERT) has achieved great success in many language understanding tasks in supervised settings (e.g., text classification). This work proposes a novel technique to fine-tune pre-trained models in an unsupervised manner for text clustering, which simultaneously learns text representations and cluster assignments using a clustering-oriented loss. It provides a way to leverage pre-trained models for text clustering. Experimental results show that the model achieves state-of-the-art performance.
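The preprocessing used in [12] and [13], TF-IDF term weighting followed by cosine similarity between documents, can be sketched as follows. The toy documents are illustrative assumptions; a real system would also tokenise properly, remove stop-words and apply stemming, as [12] describes.

```python
import math
from collections import Counter

def tfidf(docs):
    """TF-IDF weights: term frequency scaled by inverse document frequency."""
    tokenized = [d.lower().split() for d in docs]
    n = len(docs)
    df = Counter(w for toks in tokenized for w in set(toks))  # document frequency
    idf = {w: math.log(n / df[w]) for w in df}
    return [{w: (tf / len(toks)) * idf[w] for w, tf in Counter(toks).items()}
            for toks in tokenized]

def cosine(a, b):
    """Cosine similarity between two sparse {term: weight} vectors."""
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "fuzzy clustering of sentences",
    "fuzzy clustering groups sentences",
    "stock market report",
]
vecs = tfidf(docs)
# Higher similarity within the shared topic; zero across disjoint vocabularies.
print(round(cosine(vecs[0], vecs[1]), 3), round(cosine(vecs[0], vecs[2]), 3))
```

The resulting pairwise similarity matrix is exactly the kind of relational input that the fuzzy relational algorithms surveyed above ([5], [6], [10]) operate on.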



Nahrain A. Swidan et al [19]: The aim of this work is to find an efficient algorithm for web news mining, with analysis of web news data using data clustering and classification techniques based on deep learning, as well as to evaluate the best way to use website news data algorithms compared with other technologies, and to assess the reliability of the web news databases that are used as tools and techniques for data mining. In this work, an effective hash algorithm (Hash) is used to collect and cluster data for the best accuracy.

Mehdi Allahyari et al [20]: The amount of text generated every day is increasing dramatically. Text mining is the task of extracting meaningful information from text, and it has gained significant attention in recent years. K-means clustering is one of the partitioning algorithms widely used in data mining. There is a popular automatic evaluation method for text clustering.
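As an illustration of automatic cluster evaluation of the kind mentioned in [20], the silhouette coefficient scores a clustering without ground-truth labels: values near 1 mean points sit much closer to their own cluster than to the nearest other one. The toy points and labellings below are illustrative assumptions.

```python
def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def silhouette(points, labels):
    """Mean silhouette coefficient: (b - a) / max(a, b) per point, where a is
    the mean distance to the point's own cluster and b the mean distance to
    the nearest other cluster."""
    clusters = {lab: [p for p, l in zip(points, labels) if l == lab]
                for lab in set(labels)}
    scores = []
    for p, lab in zip(points, labels):
        own = [dist(p, q) for q in clusters[lab] if q is not p]
        if not own:              # singleton cluster: silhouette defined as 0
            scores.append(0.0)
            continue
        a = sum(own) / len(own)
        b = min(sum(dist(p, q) for q in members) / len(members)
                for other, members in clusters.items() if other != lab)
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

points = [(0, 0), (1, 0), (0, 1), (9, 9), (10, 9), (9, 10)]
good = silhouette(points, [0, 0, 0, 1, 1, 1])  # matches the true grouping
bad = silhouette(points, [0, 1, 0, 1, 0, 1])   # mixes the two groups
print(good > bad)  # True
```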

Table 1: Summary of Related Works on Sentence Clustering Techniques

[1] Lovedeep Singh, 2021
Method: Natural Language Processing (NLP)
Merits: Various improvements are feasible in the performance and implementation of these techniques.
Demerits: No common metric exists for some kinds of data analysis problems.

[2] Alvin Subakti, 2022
Method: Bidirectional Encoder Representations from Transformers (BERT)
Merits: Can produce a text representation that incorporates the position and context of a word in a sentence.
Demerits: The model is costly and requires more computation.

[3] Renchu Guan, Hao Zhang, 2022
Method: Deep feature-based text clustering (DFTC)
Merits: Can help users understand the meaning and quality of the clustering results.
Demerits: Bag-of-words baselines face high-dimensionality and sparsity issues and disregard the structural and sequential information of text.

[4] G. Nivetha and K.S. Gunavathy, 2018
Method: Fuzzy relational measurements
Merits: The clustering quality does not depend on the initial cluster centroids; the cluster output is consistent even across executions, and the cluster quality is improved.
Demerits: The algorithm cannot be used within more general text mining settings, such as query-directed text mining.

[5] Sarika S. Musale and Jyoti Deshmukh, 2016
Method: Fuzzy Relational Eigenvector Centrality-based Clustering Algorithm (FRECCA)
Merits: A single object may belong to more than one cluster.
Demerits: To keep the problem specification compact, the number of clusters is assumed to be given as background knowledge.

[6] Mujawar Nilofar Shabbir and Prof. Amrit Priyadarshi, 2016
Method: Hierarchical Fuzzy Relational Eigenvector Centrality-based Clustering (HFRECC) algorithm, Natural Language Processing (NLP)
Merits: Can address the problems of complexity, sensitivity, and mutability of clusters.
Demerits: Does not hold well for sentence-level texts or short text fragments; the problem must therefore be solved at the sentence level.

[7] Binyu Wang, Wenfen Liu, 2018
Method: MCSKM++ algorithm
Merits: Effectively organizes, summarizes, and navigates text information.
Demerits: More datasets are needed to further improve clustering quality with a hybrid algorithm.

[8] Muhammad Mateen, Junhao Wen, Sun Song, 2018
Method: k-means, agglomerative, fuzzy c-means, k-medoid, and Gustafson-Kessel clustering
Merits: These processes are used for the quality and performance of clustering algorithms, and these stages are necessary to complete the clustering algorithm.
Demerits: Every cluster is predicted by considering the clustering labels in the internal ensemble through an entropic principle.

[9] Majid Hameed Ahmed and Sabrina Tiun, 2022
Method: Short text clustering (STC)
Merits: Short text representation avoids poor clustering accuracy.
Demerits: Dimensionality reduction is an essential step in STC to manage time and memory complexity.

[10] Deepika U. Shevatkar and V.K. Bhusari, 2014
Method: Novel fuzzy clustering algorithm
Merits: The algorithm is capable of identifying overlapping clusters of semantically related sentences.
Demerits: Gives only an optimistic solution based on its effectiveness against the challenges of the problem.

[11] Sergios Gerakidis, 2021
Method: K-Means algorithm and the Hierarchical Agglomerative Clustering (HAC) algorithm
Merits: Low time cost for the former and accuracy for the latter.
Demerits: The problem of finding the minimum spanning tree in a complete graph induced by the input dataset.

[12] Chaman Lal, Awais Ahmed, 2021
Method: K-Means algorithm
Merits: This procedure for identifying clusters in unstructured texts can be used in various applications.
Demerits: The data problem is addressed by expressing a relation that quantifies the degree of similarity, or dissimilarity, between pairs of items.

[13] Rafael Gallardo García, Beatriz Beltrán, 2020
Method: K-Means and Spectral Clustering
Merits: Evaluates the performance and accuracy of several clustering algorithms.
Demerits: Cannot deal with noisy data and outliers; identifying clusters with non-convex shapes is not feasible.

[14] Sumit Mayani, Saket Swarndeep, 2020
Method: MiniBatchKMeans algorithm
Merits: Improves the quality of the initial cluster centers.
Demerits: Needs to enhance and repair the soft defects of isolated pieces of data in the basic k-means algorithm.

[15] Qing Yin, Zhihua Wang, 2022
Method: Deep Embedding Clustering (DEC)
Merits: Generalizable to various text clustering challenges, not just limited to short texts.
Demerits: Can sometimes produce meaningless representations.

[16] Vivek Mehta, 2021
Method: WEClustering
Merits: Handles large text datasets in an effective way; hence, more accurate clusters are formed.
Demerits: The problem of high dimensionality at the level of individual words is not considered.

[17] Supakpong Jinarat, Bundit Manaskasemsak, 2018
Method: Word2Vec
Merits: Taking more datasets further improves clustering quality with a hybrid algorithm.
Demerits: Considering only popular words or phrases to cluster short texts is inefficient because of the sparsity problem.

[18] Shaohan Huang, Furu Wei, 2020
Method: Bidirectional Encoder Representations from Transformers (BERT)
Merits: The model achieves state-of-the-art performance.
Demerits: The technique is executed on a multi-core CPU; it can be further applied to spectral and optimization methods.

[19] Nahrain A. Swidan, 2020
Method: Hash algorithm (Hash)
Merits: Used to collect and classify data for the best accuracy.
Demerits: The web services clustering technique works with fuzzy clustering under various functional requirements.

[20] Mehdi Allahyari, 2017
Method: K-means clustering algorithm
Merits: The final result improves time consumption.
Demerits: Requires the number of clusters (k) to be specified ahead of time.

Table 1 summarizes the related works on sentence clustering techniques, reviewing their merits and demerits.

III. CONCLUSION

Based on this survey, various sentence clustering techniques have been identified along with their merits and demerits in extracting knowledge from data. The clustering techniques and models surveyed will help improve sentence clustering by increasing its accuracy and by reducing anomalous data. Good text clustering requires effective feature selection and a proper choice of algorithm for the task at hand. It is observed from the above analysis that the various sentence clustering techniques discussed here provide significant performance. This paper attempts to shed light on the less explored possibilities in the clustering field.

REFERENCES

[1]. Purushothaman B, "Clustering performance in sentence using fuzzy relational clustering algorithm", Volume No. 03, Special Issue No. 02, February 2015.
[2]. Jinto Jacob, "A Survey on Techniques used for Sentence Clustering of Text Documents", IJRASET, ISSN: 2321-9653, Vol. 2, Issue VI, June 2014.
[3]. Deepika U. Shevatkar and V.K. Bhusari, "Clustering Sentence-Level Text Using a Hierarchical Fuzzy Relational Clustering Algorithm", IJCSMC, Vol. 3, Issue 12, December 2014, pp. 11-15.
[4]. G. Nivetha and K.S. Gunavathy, "Clustering text in sentence level", JETIR, Volume 5, Issue 8, August 2018.
[5]. Sarika S. Musale and Jyoti Deshmukh, "Sentence level text clustering using a fuzzy relational clustering algorithm", Volume 05, Issue 02, 2016.
[6]. Mujawar Nilofar Shabbir and Prof. Amrit Priyadarshi, "Clustering Sentence Level Text using Hierarchical FRECCA Algorithm", International Journal of Advanced Research in Computer and Communication Engineering, Vol. 5, Issue 6, June 2016.



[7]. Andrew Skabar and Khaled Abdalgader, "Clustering
Sentence-Level Text Using a Novel Fuzzy Relational
Clustering Algorithm", IEEE transactions on
knowledge and data engineering, VOL. 25, NO. 1,
JANUARY 2013.
[8]. Muhammad Mateen, Junhao Wen, Sun Song, "Text
Clustering using Ensemble Clustering Technique",
(IJACSA) International Journal of Advanced Computer
Science and Applications, Vol. 9, No. 9, 2018.
[9]. Majid Hameed Ahmed and Sabrina Tiun, "Short Text
Clustering Algorithms, Application and Challenges: A
Survey", 2022.
[10]. Deepika U. Shevatkar and V.K.Bhusari, "Clustering
Sentence-Level Text Using a Hierarchical Fuzzy
Relational Clustering Algorithm", IJCSMC, Vol. 3,
Issue. 12, December 2014, pg.11 – 15.
[11]. Sergios Gerakidis, Sofia Megarchioti, "Efficient Big
Text Data Clustering Algorithms using Hadoop and
Spark", International Journal of Computer Applications
(0975 – 8887), Volume 174 – No. 15, January 2021.
[12]. Chaman Lal, Awais Ahmed, Reshman Siyal, "Text
Clustering using K-MEAN", Volume 10, No.4, July -
August 2021.
[13]. Rafael Gallardo García, Beatriz Beltrán, "Comparison of Clustering Algorithms in Text Clustering Tasks", Vol. 24, No. 2, 2020, pp. 429–437.
[14]. Sumit Mayani, Saket Swarndeep, "A Novel Approach
of Text Document Clustering by using Clustering
Techniques", International Research Journal of
Engineering and Technology (IRJET), Volume: 07
Issue: 06 | June 2020.
[15]. Qing Yin, ZhihuaWang, "Improving Deep Embedded
Clustering via Learning Cluster-level Representations",
pages 2226–2236, 2022.
[16]. Vivek Mehta, Seema Bawa, "WEClustering: word
embeddings based text clustering technique for large
datasets", 2021.
[17]. Supakpong Jinarat, Bundit Manaskasemsak, "Short
Text Clustering based on Word Semantic Graph with
Word Embedding Model", IEEE, 2018.
[18]. Shaohan Huang, Furu Wei, "Unsupervised Fine-tuning
for Text Clustering", 2020.
[19]. Nahrain A. Swidan, Shawkat K. Guirguis, "Text
Document Clustering using Hashing Deep Learning
Method", 2020.
[20]. Mehdi Allahyari, Seyedamin Pouriyeh, "A Brief
Survey of Text Mining: Classification, Clustering and
Extraction Techniques", 28 Jul 2017.

Author Profile
Dr. S. Sukumaran is working as an Associate Professor in the Department of Computer Science (Aided) at Erode Arts and Science College, Erode, Tamilnadu, India. He is a member of the Board of Studies in various autonomous colleges and universities. In his 33 years of teaching experience, he has supervised more than 55 M.Phil. research works and has guided 21 Ph.D. research works, with more ongoing. He has presented and published around 80 research papers in national and international conferences and peer-reviewed journals. His areas of research interest include Digital Image Processing, Networking, and Data Mining.

