Measuring the Impact of E-Portfolio Assessment on the Moroccan Undergraduate EFL Students’ Reading Skill at the University of Moulay Ismail: A Case Study

Motivated by the conviction that formative assessment can foster positive changes in reading instruction, this study investigates the impact of the e-portfolio, as a formative assessment tool, on the reading skill of semester-two EFL students at the Department of English Studies, Moulay Ismail University of Meknes, Morocco. A pre-posttest experimental design was adopted. The sample consists of thirty-two students randomly assigned to two groups: an experimental group of sixteen respondents who received e-portfolio assessment and a control group of sixteen participants who received traditional standardized testing. The two groups received different assessment methods but similar reading comprehension instruction. A reading proficiency test was administered to both groups on two occasions, before and after the experiment. The test consists of three multilevel tasks of literal, inferential, and evaluative comprehension, and its scores serve as the criteria for statistical comparisons between the groups. The results show that the experimental group improved its reading comprehension in all three tasks, whereas the control group improved only in the literal comprehension task. Overall, the results of the current study are consistent with past research supporting the use of e-portfolios to assess and monitor students' reading proficiency. The findings and conclusions of the present study can serve as a starting point for such an assessment to take place: decision-makers and teachers can promote the use of e-portfolios in language teaching and learning as a form of formative assessment.


Introduction
Learning a language is a complex process, and assessing that process is an equal struggle (Brown, 2003). It involves measuring how far learners have reached beyond the confines of their first language into a new language, a new culture, and a new way of thinking and feeling. Assessing a language learning experience involves an intricate web of variables that cannot easily and mechanically be reduced to a set of questions in a standardized test (Alderson, 2001). Brown (2000) argues that the "slippery" nature of language learning means it cannot be controlled like a mechanical model. The language assessment process is slippery, especially since theories of learning do not directly yield implications for language assessment practices. The difficulty is compounded by the fact that schools around the world are beginning to adopt a 'culture of data', that is, the collection of data on students' levels and its association with schools' goals (NAESP, 2011). This culture of data raises the problem of determining which data and which data instruments will accurately reflect students' learning and schools' goals. Brown (2003) views a quantitative summative assessment of language learning as a contradiction: "isn't it ironic that untold hours of reading, listening to lectures, note taking, writing papers, doing assignments, and going to classes are invariably reduced to one of five letters of the alphabet" (p. 281). In the Moroccan higher education context, the educational lives of students are similarly summed up on a one-to-twenty assessment scale. Honored, average, and marginal students are identified not so much by the quality of their observed performance as by their grades. On top of that, grading criteria vary considerably from one teacher, school subject, and school system to another.
As Brown (2003) notes, "Certain institutions are 'known' by transcripts evaluators to be stingy with high grades, and therefore a B in those places is equivalent to an A in others" (p. 282). The variability of the standards for assigning grades occurs even within the same educational institution, which makes these grades meaningless and often uninterpretable at the institutional level. In this context, the present study underscores the importance of the new alternative assessment, which understands assessment as a qualitative and formative process. It empirically examines whether the e-portfolio, as an example of an alternative to standardized language assessment, affects the development of the reading skill of ELT learners. To investigate this use of e-portfolios in the teaching and learning of reading skills, this paper adopts a pre-posttest experimental design. Assessing the reading skill is highlighted in this study since reading is the skill through which the knowledge of the ages and the precepts of the future are accessed, and it is the skill most researched by linguists, psychologists, educators, and others (Brown, 2003). The reading process is an interaction between a reader and a text; in this process, the reader thinks about what has been read, what the text means, and how to relate that meaning to prior knowledge (Alderson, 2001). Assessment is integral to the reading comprehension process. The most commonly used reading assessments are summative. Instructors have often used reading comprehension tests to see what a learner already knows instead of looking at what the learner needs to know (Alderson, 2001). Learners thus do not get to learn the skills they need to become proficient readers. On the other hand, reading assessment can also be "formative", that is, a continuous monitoring of students' progress in reading proficiency across time.
Teachers are then able to use the information gathered in the formative process to adjust their teaching approaches to better meet learners' needs (OECD, 2015). In other words, formative assessment is an assessment for learning. One instance of a formative assessment tool for reading comprehension is portfolio assessment: a collection of students' work and an authentic measurement of the learners' reading process, because it shows the reading skill as it actually takes place in the classroom across a variety of assignments. This allows instructors to gain insight into a learner's academic growth and provide relevant feedback and evaluation. However, portfolio assessment requires a great deal of teacher effort to manage a large number of portfolio folders. This disadvantage is balanced by the advantage of web-based portfolio assessment, which allows students to flexibly put forward the best examples and evidence of their learning from artifacts collected in many media types (Ellis, 2008).
Recently, some researchers have studied the topic of e-portfolio assessment, such as the affordances of an intercultural e-portfolio (Hanukaev, 2023), students' views of e-portfolio (Janwarini et al., 2023), e-portfolios in online courses (Kusuma & Waluyo, 2023), and teacher's experiences on the use of e-portfolio in teaching speaking (Lasminiar, 2022). Responding to this growing need to reconsider reading assessment, the current study explores the impact of e-portfolio assessment on the development of reading comprehension proficiency.
The assessment of the reading skill in Moroccan ELT has been influenced by the positivist paradigm (Mullis et al., 2016). The tools for assessing reading have been standardized into a quantitative elicitation of limited sets of information from students. That is, "there is little ground for the all too common practice of generalizing from a single sample to a student's ability" (Murphy & Broadfoot, 1995, p. 287). The numerical grades that students are given for their final reading assignment may not be true indicators of their reading ability. Hamp-Lyons and Condon (2001) argue that a reliable and valid measurement of language skills requires moving beyond positivism to the constructivist paradigm. Constructivism values the qualitative progress of students rather than ranking them quantitatively in relation to each other, and thus it embodies an assessment for learning. Portfolio and electronic portfolio assessment techniques, for instance, enact a constructivist approach and open the possibility of an interaction between the assessment and the learning of the reading skill. From this perspective, whether the use of e-portfolio assessment has a positive effect on the development of EFL learners' reading skills, as against traditional standardized testing, is worth exploring in this paper.

Literature Review
Reading pedagogy has witnessed constant shifts over time. In the modern era of language education, the grammar-translation method of teaching remained the most predominant one. It emphasizes the comprehension of foreign-language written text through the L1. Under grammar-translation, learning was underpinned by the theory of behaviorism, which perceives the learning of reading as a stimulus-response process triggered by external input. The postmodern era of language studies concedes that reliability can be established in such comprehension tests; however, the data generated from these tests do not capture enough of the learners' complete performance (Baker, 2020). Around the 1970s, constructivism yielded attempts to increase the authenticity of reading tasks and to place greater value on the reader. In other words, the reading comprehension process is subjective, which invokes a rethinking of reading assessment and argues for the inclusion of open-ended questions in reading comprehension tests. Equally important, portfolio assessment meets the requirements of a reader's assessment as complementary to standardized reading comprehension tests (Efendi et al., 2017). To this end, portfolio and e-portfolio assessment encourages readers to interact more with the written text through a personal collection, reflection, and interpretation of reading works (Lam, 2016). It also develops the learner's capacity for self-regulated learning.
Against this background, reading assessment theory shifted away from a naïve realist perspective, which views reading as a habit formed by meeting a list of set standards. Under that view, students who perform well on the given tasks in a reading comprehension test are perceived as high achievers, while the others are judged to be at low levels of accomplishment. The appropriate assessment credo is thus that "[reading] assessment should produce information that is useful in helping students become better readers, and assessment should do no harm" (Afflerbach, 2016). This critique was among the pressing reasons behind the emergence of a constructive approach to learning that argues for the subjective nature of reading. The use of reading portfolio assessment has been a very significant implication of the constructive approach, for it allows students to interpret texts and construct meaning differently. More importantly, portfolios and e-portfolios are considered more authentic methods of assessment since they have more in common with classroom practices. Cognitivism, collaborative learning, and socio-constructivism are integral parts of reading portfolio assessment (Lorenzo & Ittelson, 2015). Cognitivism is exhibited in the fact that portfolio assessment is process-oriented; portfolio and e-portfolio assessments provide cognitive evidence of students' progress in the reading skill (Mokibelo, 2018). Collaborative learning involves students, teachers, parents, and others in forming a language-learning community. Put differently, self-, peer, and teacher assessment are all used to facilitate the uptake of language skills. As for socio-constructivism, "self-assessment, reflection, context richness and development over time are most relevant to the socio-constructivist view of learning through writing portfolios" (Lam, 2018, p. 15).
This research holds that the aforementioned theories ground the use of the e-portfolio as a formative tool to assess the literal, inferential, and evaluative comprehension sub-skills of the reading skill proposed by Clark and Rumbold (2006).

Models of reading e-Portfolio assessment
In principle, an e-portfolio assessment underpins the collection of learning evidence by students over an extended period of time, such as a semester or up to one academic year. Throughout this process, learners are asked to gather evidence of learning in terms of complexity and accuracy (Aghazadeh & Soleimani, 2020). The first and most significant question that arises here is how to make an e-portfolio. The general development procedures an e-portfolio assessment requires involve choice, variety, and reflection (Reynolds & Rice, 2006). Choice reflects the learner's autonomy to select their preferred reading works for the e-portfolio collection. Variety refers to the various forms of evidence in the portfolio that represent students' reading performance. Lastly, reflection entails a self-assessment of the reading e-portfolio, either by following internally set goals or externally dictated criteria. Burner (2014), on the other hand, suggests five procedures to describe the average portfolio: collection, selection, self-assessment, reflection, and delayed evaluation. The employment of e-portfolio assessment in a number of institutions follows such a framework (Ngui et al., 2020). Despite these stated procedures, portfolios take different forms according to the context and objectives they serve. By way of example, the following describes an application of e-portfolios for assessing the reading skills of ELT freshmen in the university context. The implementation of Burner's (2014) average portfolio procedures is illustrated in Figure 1. Figure 1 demonstrates the stages of a typical portfolio assessment in general and of a portfolio for reading assessment in particular. After the students are introduced to this system of assessment, they start a continuous collection of their reading works.
Selection, the next stage, is about the informed decisions students make to choose the most relevant items representing their effort throughout the specified study period. Students then self-assess and reflect on their work, which in fact takes place automatically when students engage in the selection process. Self-assessment and reflection are different mechanisms. For example, students are assigned a reading sheet each week to provide a synthesis and their own interpretation of the sheet's themes. Students then keep a compact of all these weekly works alongside other reading works they may personally choose to do. Since the portfolio is not an archive of every piece of work students have ever done, only the works that students believe best sample their effort are included in the portfolio. According to Lam (2016), reflection involves a more metacognitive thinking process than self-assessment; some other scholars regard reflection as part of self-assessment (Burner, 2014). In the last step, "Delayed evaluation means that a summative grade is assigned to a final draft until it is satisfactorily revised with formative feedback" (Lam, 2018). The three types of feedback displayed at the bottom of Figure 1 (self-feedback, peer feedback, and teacher feedback) are the three sources of evaluation students receive through the portfolio process. Double-edged arrows indicate that feedback is generated continuously before the delayed evaluation is informed. Figure 1 does not represent a one-size-fits-all model of the portfolio; teachers can rather apply the portfolio procedures in whatever way meets the pedagogical needs of their classroom. Figure 2 presents a portfolio assessment framework proposed by Lam (2018) that can be flexibly set up by teachers and administrators. It incorporates five stages: purpose, content and procedures, criteria, monitoring, and evaluation.
Figure 2. Portfolio assessment framework (Lam, 2018, p. 35).
The organizational flow of the portfolio begins with an introduction to the pedagogical purposes of the portfolio framework, which teachers need to provide students within the first sessions of the learning program. Equally important, teachers are expected to match the purposes of the portfolio with the learning outcomes.

Research Design
An experimental design was used in this study "because experiments are controlled, they are the best of the quantitative designs to use to establish probable cause and effect" (Creswell, 2012, p. 295). Out of the 15 different types of experiments proposed by Shadish et al. (2002), this study employs a pre- and post-test true experimental design. Table 4 illustrates the typical use of such a quantitative design. The researcher studied the question, "Do students who receive a formative assessment in the form of an e-portfolio, a digital record of a student's reading works over a given study time, excel in reading comprehension compared with their counterparts who receive only traditional standardized testing?" Using a control and an experimental group composed of first-year EFL students who participated in the experiment, the researcher gave the control group standard reading comprehension instruction and a traditional exam, and the other group standard reading comprehension instruction plus an e-portfolio assessment. Each group met with the same instructor (the researcher) once a week for four weeks, for two hours per session. The instructional methodology was designed to be as similar as possible. The only intentional difference between the two groups was the method of assessment: e-portfolios or tests. At the end of the four-week experiment, a posttest was administered to the two groups to compare their reading comprehension levels.

Participants
The participants in this study were 32 first-year undergraduate students enrolled in the second semester at the English department of Moulay Ismail University. First-year ELT students were selected because reading comprehension courses are provided only at this level. The nature of this study did not allow for a large sample size, since participants were individuals who were available to take part in the experiment and who volunteered to participate.

Procedure
The administration of the School of Arts and Humanities in Meknes provided the researcher with a classroom that was available throughout the whole month of May 2019 for the purposes of the study's experiment. After the participants showed their interest in participating in the experiment, they took the pretest on May 7th. The researcher non-randomly divided the participants into two groups. Each group was independently informed of the experiment's schedule. The experimental group had one session each Wednesday of May 2019 from 14:00 to 16:00; the control group had sessions every Thursday of May 2019 from 14:00 to 16:00. The post-test was administered on May 31st.

Data Analysis
Alongside descriptive statistical parameters, namely means and standard deviations, this study made use of independent-sample t-tests and paired-sample t-tests for the following purposes.
• Independent-sample t-tests were used to describe the difference between the experimental group and the control group on the same test. That is, the experimental and the control group were compared in terms of their pretest and post-test scores.
• Paired-sample t-tests compared the achievement of the same group on two different tests. In practice, the pretest scores of the experimental group were compared with the post-test scores of the same group, and likewise for the control group's achievements in the pre- and post-test.
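The two comparisons above can be sketched with SciPy. This is a minimal illustration only: the score arrays below are hypothetical placeholders, not the study's actual data.

```python
# Minimal sketch of the study's two statistical comparisons using SciPy.
# The score arrays are hypothetical placeholders, not the study's data.
from scipy import stats

# Hypothetical pretest/posttest scores (16 participants per group)
exp_pre  = [12, 14, 13, 15, 11, 12, 14, 13, 12, 15, 13, 14, 12, 13, 14, 12]
exp_post = [21, 22, 20, 24, 19, 21, 23, 22, 20, 24, 21, 23, 20, 22, 23, 21]
ctl_post = [14, 13, 15, 14, 13, 15, 14, 13, 15, 14, 13, 15, 14, 13, 15, 14]

# Independent-samples t-test: experimental vs. control on the same test
t_ind, p_ind = stats.ttest_ind(exp_post, ctl_post)

# Paired-samples t-test: the same group's pretest vs. posttest
t_pair, p_pair = stats.ttest_rel(exp_pre, exp_post)

print(f"Between groups (posttest): t={t_ind:.2f}, p={p_ind:.4f}")
print(f"Within experimental group: t={t_pair:.2f}, p={p_pair:.4f}")
```

The same pattern, run per reading task (literal, inferential, evaluative), would yield the mean, SD, and p values reported in the Results tables.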

Results
This part of the analysis measures the experimental group's improvement in the three reading comprehension tasks. To gain an elaborated understanding of the impact of the e-portfolio assessment on reading proficiency, the data for each reading task were analyzed separately.

Development of the Reading Comprehension Skill in the Literal Comprehension Task
The literal comprehension task comprised three components, namely 'identifying a topic sentence', 'identifying main ideas', and 'identifying supporting details'. It is observed from Figure 3 and Table 1 that the participants made a statistically significant improvement in three components of the Literal Comprehension posttest, notably word meaning, sentence meaning, and the text's main ideas, whereas the text's topic sentence component remained stable. In the Literal Comprehension task, the participants scored a mean of 12.44 (SD=2.65) in the word meaning component pretest. Their mean score in the posttest of the same component was 21.38 (SD=2.57). The difference between the mean scores was statistically significant at p=0.02. Likewise, the mean score of the sentence meaning component in the pretest was 17.44 (SD=2.60), while its mean score in the posttest was 24.38 (SD=0.71). The level of difference was statistically significant at p=0.004. On the other hand, no statistically significant difference was observed in the achievements of the participants in the text's topic sentence component (pretest mean score=14.19, SD=3.10; posttest mean score=14.50, SD=2.78; p=0.30). With regard to the text's main ideas component, the respondents exhibited statistically significant achievements in the posttest at p=0.001 (pretest mean score=16.88, SD=2.91; posttest mean score=24.25, SD=0.77).

Development of the Reading Comprehension Skill in the Inferential Comprehension Task
As shown in Figure 4 and Table 2, the experimental group made a statistically significant improvement on three inferential comprehension components, namely the text's theme, the text's message, and the author's perspective. The other inferential comprehension component, the text's tone, was stable. In the text's theme component, the respondents achieved a mean score of 12.19 (SD=1.27) in the pretest, while the mean score of the posttest was 24.94 (SD=0.25), and the level of difference was highly significant at p=0.003.
Similarly, the mean score of the text's message component had highly increased from a mean score of 11.94 (SD=0.92) in the pretest to a mean score of 24.81 (SD=0.54) in the posttest. These statistical data confirmed that the respondents showed a statistically highly significant improvement in the text message component at p=0.001.
When it comes to the text's tone component, the respondents did not exhibit any significant improvement in the posttest (pretest mean score=9.31, SD=1.07; posttest mean score=9.44, SD=0.96; p=0.26). In the author's perspective component of the inferential task, however, the experimental group exhibited a significant improvement in the posttest mean score: the group's mean score was 9.69 (SD=0.87) in the pretest and 11.38 (SD=1.40) in the posttest.

Development of the Reading Comprehension Skill in the Evaluative Comprehension Task
In the Evaluative Comprehension task, a statistically significant improvement was observed only in the first two and the last components of the posttest, which were distinguishing facts and opinions, evaluating arguments, and expressing opinions. The third evaluative comprehension component, identifying assumptions, remained stable.
Figure 5. Development of the four reading comprehension components observed in the Evaluative Comprehension task.
As evidenced in Figure 5 and Table 3, the pretest mean score of the distinguishing facts and opinions component of the evaluative comprehension task was 11.75 (SD=0.85), while its posttest mean score was 20.25 (SD=2.56), and the level of difference was highly significant at p=0.003. The mean score of the evaluating arguments component was 8.56 (SD=1.09) in the pretest and 10.81 (SD=2.22) in the posttest. The difference between the mean scores was highly significant at p=0.001. On the other hand, the mean score of identifying assumptions changed only slightly, from 7.56 (SD=0.81) in the pretest to 7.69 (SD=0.79) in the posttest, at p=0.001; that is, the respondents showed no improvement in this component. As for the expressing an opinion component of the evaluative comprehension task, the achievements of the respondents increased significantly from 12.63 (SD=0.71) in the pretest to 23.13 (SD=2.06) in the posttest, and the level of significance was p=0.002.

Discussion
The research results are discussed in relation to the research questions, showing the impact of e-portfolio assessment on the participants' reading skills. Further, studies in the literature are linked to the research outcomes.
• Q1. Does the e-portfolio method of assessment affect the participants' English reading comprehension skills?
The findings of comparing the respondents on the pretest confirmed that all the participants had statistically equal achievements in the literal, inferential, and evaluative comprehension tasks of the overall Reading Comprehension Test. There was, therefore, homogeneity of variance among the participants in the study. This being the case, any noticeable difference between the achievements of the control and the experimental groups was ascribed to the treatment.
The results of comparing the two groups on the posttests provided evidence that the experimental group's achievements in the literal, inferential, and evaluative comprehension tasks surpassed those of the control group. This validated that the use of e-portfolio assessment had a very positive impact on the students' reading comprehension proficiency.
The development of the comparative groups' reading proficiency was further traced by comparing each group's achievement in the pre- and post-test separately. Paired t-tests for the reading comprehension tasks showed that the performance of the control group improved only in the literal comprehension task posttest, whereas their inferential and evaluative comprehension achievements remained almost stable. The control group's improvement in the literal comprehension task only can be attributed to the fact that this task is what Gray (1960) described as a literal, mere reading of the lines. In other words, it was assumed that it is more difficult to attain a critical understanding of a text than to infer meaning, and both of these are more difficult than merely understanding the literal meaning. On the contrary, the experimental group yielded a statistically highly significant improvement in all the reading comprehension tasks. This demonstrated the positive value of the e-portfolio assessment for the students' reading comprehension proficiency. These findings were supported by a number of sources (Al-Hawamdeh et al., 2023; Hanukaev, 2023; Janwarini et al., 2023; Kusuma & Waluyo, 2023; Lasminiar, 2022; Li-ping & Ahmad, 2023; Nourdad & Banagozar, 2022) that confirm the use of e-portfolios to improve learners' language skills. That is, the responsibility, collaboration, autonomy, and critical thinking that students enjoy by virtue of e-portfolio assessment correlate positively with language skills improvement. It was also observed during the experiment that the group who used e-portfolios to publicly showcase what they knew about reading comprehension showed more incentive to learn more. In the same vein, Ling (2016) confirmed that the fact that e-portfolios are a platform for students to display and publicize their learning products influenced learners to create samples that best represent their learning input.
That is, students devoted more time and extended their practice in reading so that they could build their e-portfolios. This echoes Krashen's (1985) assertion that more exposure to language input yields more enhanced language skills. In addition, the nature of reading comprehension aligned with the nature of the assessment the e-portfolio provided. In fact, Alderson (2001) sees reading as a process of constructing meaning out of an interaction between a reader and a text, such that reading comprehension can be assessed by some methods and not by others. Reading e-portfolio assessment, for its part, views reading as a process and accordingly requires students to procedurally use their knowledge and skills to construct meaning and add value to their understanding of the text (Goldsmith, 2006).
Overall, the generated results of the t-tests constituted substantial evidence that e-portfolio assessment, as a type of formative assessment, can significantly improve students' reading comprehension proficiency. Yet further discussion is needed to address the effect of e-portfolio assessment on the further levels of comprehension represented by the four components of each of the three reading comprehension tasks. The nature of the impact observed on each task aspect is discussed independently below.
• Q2. Does e-portfolio assessment develop the participants' English literal comprehension?
With regard to the Literal Comprehension task, the treatment group showed a statistically significant improvement on three components of the Literal Comprehension posttest, namely word meaning, sentence meaning, and the text's main ideas. Yet the fourth and last reading sub-construct, the text's topic sentence, remained stable. That is, the improvement of the participants' achievements on the first three elements of the literal comprehension task (word meaning, sentence meaning, and the text's main ideas) was attributed to the intervention. The observed constancy of the text's topic sentence construct may be owing to the detailed nature of the text provided in the test and the fact that the text discussed multiple concepts without a directly expressed topic sentence. The participants had to use inferential comprehension to figure out the text's topic sentences, whereas the task was on literal comprehension. On the other hand, the e-portfolio assessment enhanced the students' ability to identify word meanings. These findings echoed earlier findings in the literature, namely the results of Nassirdoost's (2015) experimental study on the effect of using portfolio assessment on EFL learners' vocabulary achievement. The study revealed that the students who made a vocabulary portfolio had significantly larger vocabulary sizes in comparison with the control group's vocabulary achievement. He added that, because the portfolio triggered learners' motivation to read more, they developed the vocabulary needed to understand and produce various English sentences. Beyond this, portfolio assessment positively developed learners' self-reflection and critical thinking, as evidenced by the achievements of the respondents on the inferential and evaluative comprehension tasks in the present study.
• Q3. Does e-portfolio assessment develop the participants' English inferential comprehension?
When it comes to the Inferential Comprehension task, a statistically significant improvement was observed on three reading comprehension sub-constructs: inferring the text's theme, inferring the text's message, and inferring the author's perspective. The last inferential comprehension component, the text's tone, was constant. The observed increase in the participants' achievements on the three reading constructs (text's theme, text's message, and author's perspective) came by means of the e-portfolio assessment treatment. Likewise, Köse (2006) found that portfolio-based assessment correspondingly raised students' level of reading comprehension after it increased their awareness of the reading materials. That is to say, a high level of awareness was one of the variables the portfolio assessment promoted, and it helped students make accurate inferences from a text. In fact, the first and second components of the inferential task, inferring the text's theme and the text's message, required students to be aware enough to draw connections within the text, whereas the third construct, inferring the author's perspective, was about using prior information to deduce the implicit information in the text. The study's findings were also similar to those attained by Demirel and Duman (2014), who conducted a twelve-week experiment on 31 eighth-grade participants to determine the impact of the portfolio on language skills achievements. The study's results showed that the experimental group outperformed the control group in reading, writing, and listening skills. The researchers stated that the use of portfolio assessment motivated the students to expose themselves to more language materials; for instance, they read and wrote more so that they could select, from among their language products, the ideal ones that best represented their linguistic level.
On the other hand, the students' improvement observed on the inferential comprehension task in the study at hand can be attributed to the fact that reading e-portfolio assessment engaged students in a real-life learning experience. That is, the students picked reading texts of personal interest and attempted to make inferences about each text's theme, message, and perspective by applying their life experience and general knowledge. A similar report was made by Huot (2005), who asserted that using portfolios to assess language skills prompted students to perform intellectually challenging, real-world tasks. Portfolio assessment encouraged students to create knowledge (Lam, 2018). Other constructivist views of language learning addressed earlier in the review of the literature are also in line with the study's findings.
• Q4. Does e-portfolio assessment develop the participants' English evaluative comprehension?
Concerning the Evaluative Comprehension task, participants marked a minute decrease on only one component of evaluative comprehension: identifying assumptions. This may be because assumptions are too implicit to be noticed, since they are integrated into our belief system, which we assume to be true and use to make inferences about the world around us (Paul & Elder, 2012). However, a statistically significant enhancement was observed on three reading sub-constituents of the posttest, particularly distinguishing facts from opinions, evaluating arguments, and expressing opinions. That being so, e-portfolio assessment affected the participants' development of reading skills. Congruent with these findings, Harrison (2004, p.174) claimed that good reading e-portfolio assessment qualified readers to:
▪ set themselves purposeful reading and writing goals;
▪ decide where they need to look for reading resources;
▪ navigate effectively toward those resources;
▪ adjudicate thoughtfully between possible sources of information, rejecting, selecting, and prioritizing;
▪ decide which parts of the chosen sources will be useful, rejecting, selecting, prioritizing;
▪ decide on how to use the sources: to edit, order, transform, critique;
▪ produce a new artifact, matched to its audience;
▪ evaluate the adequacy of their performance, revising and looping back to earlier stages of the process as appropriate.
The aspects of the e-portfolio process just listed helped readers not only to produce accurate inferences from a text but also to go beyond its lines and develop evaluative commentary (Harrison, 2004). For a similar purpose, Barrs et al. (1989) investigated whether traditional standardized testing or portfolio assessment better promoted advanced independent reading. An advanced independent reader was defined as someone who reads different types of texts and is able to comprehend, criticize, and comment on the texts' meaning. The findings demonstrated that with such a newer assessment, students had the opportunity to provide rich, qualitative evidence of their reading proficiency. This expanded their reading practices, and they thus became more skilled comprehenders. Unlike traditional multiple-choice testing of reading proficiency, which takes place on a single occasion, performance assessments such as portfolios emphasize varied authentic activities that result in a cumulative portrait of the student (Harrison, 2004). Lam (2018) illustrated that standardized testing distracted students with the marks and grades teachers assign; hence, they focused more on performance than on learning. Portfolio assessment, by contrast, provided students with actionable feedback and qualitative commentaries, and students thus became more disposed to absorb teachers' feedback and apply it to their learning experience. Likewise, e-portfolios helped develop what Schumm and Post (1997) called critical readers, who, by definition, distinguish facts from opinions, evaluate ideas and arguments, and express their opinions. Similarly, Liu, Zhuo, and Yuan (2004) examined a networked portfolio system and found that it enhanced students' critical and analytical thinking.
As a matter of fact, evaluation and self-reflection are constant steps in the e-portfolio assessment process, in which students evaluate the overall content they have selected for the online showcase (Carey, 2009). The e-portfolio thus encouraged students to evaluate information gathered or generated through critical reasoning. Critical reasoning and evaluation are in fact components of the Higher Order Thinking Skills, HOTS (Alharbi et al., 2022; Zain et al., 2022). Accordingly, an empirical study (Faravani & Atai, 2015) explored the relationship between portfolio assessment and HOTS and revealed that the experimental group outperformed the control group on the HOTS posttests. In detail, learners became more aware and conscientious of the learning process.
Since students participated in the assessment process through self-assessment, the scoring process rendered instruction more learner-centered. In fact, learner-centered pedagogy is an essential requirement for the development of HOTS (Kharisma & Lestari, 2022). Goldsmith (2006), on the other hand, looked into the added value of using an electronic portfolio assessment rather than a paper-based portfolio.
The "e" makes portfolios readily available showcases that can be viewed and reviewed easily. The move to a web-based portfolio allowed students to reflect on and evaluate their work constantly and flexibly. In brief, one of the most positive outcomes of the e-portfolio-based assessment for students was their ability to comprehend a text at the literal, inferential, and evaluative levels. These findings are supported by the constructivist theories of language development reviewed earlier.
As an experimental study examining the relationship between e-portfolio assessment and the development of language skills, this study points to avenues for future research in this area. Good examples in this context include the following:
▪ Exploring teachers' and students' perceptions of the use of e-portfolios, rather than standardized testing, to assess language skills.
▪ Examining the effect of technology-based versus paper-based portfolio assessment on students' language learning.
▪ Measuring the impact of e-portfolio assessment on developing learners' higher-order thinking skills.
▪ Probing the relationship between e-portfolio assessment and students' achievement in language learning.
▪ Investigating teachers' and students' attitudes toward the use of formative assessment in tertiary education.

Conclusion
Grounded in constructivist perspectives of language assessment, this study explored the impact of e-portfolio-based assessment on undergraduate freshmen's reading skills in the Moroccan EFL context. A pre-posttest experimental design was used. Thirty-two students enrolled in the first year of English studies at Moulay Ismail University were randomly assigned to two groups: sixteen students in the experimental group received a reading e-portfolio assessment, and sixteen in the control group had traditional standardized testing. A reading comprehension test composed of three multilevel tasks of literal, inferential, and evaluative comprehension was administered to the two groups before and after a month of treatment. Statistical analysis of the pre- and posttest data from the three tasks revealed strong evidence that e-portfolio assessment enhanced the experimental group's reading comprehension, while the control group showed slight improvement only in literal comprehension. Further in-depth analysis of the three reading tasks, tracing the treatment group's improvement, showed significant progress in all the reading comprehension components except the constructs of identifying the text's topic sentence, tone, and assumptions. The minuscule decline in the participants' achievements on these components was due to their implicitness in the given text. Even so, the use of technology in educational settings in general, and in language assessment in particular, promotes constructive learning.

Implications and Recommendations
It is hoped that the findings and conclusions derived from the present study can serve as a starting point for such an assessment to take place. Decision-makers and teachers can thus promote the use of e-portfolios in language teaching and learning as a form of formative assessment. Potential implications for students are also worth noting. Learners can use e-portfolios as preparation guides for their written examinations, and they can reflect on their learning progress by means of them. Although it is not easy to implement e-portfolio assessment in a large class, it is feasible with considerable effort on the part of both teacher and learner. As the Moroccan literature pays little attention to the use of technology-based assessment in improving students' language skills, this paper's objectives still await illumination by further research. A replication of this type of study in different settings with a larger sample size is foremost recommended. A longitudinal study on the same topic using developmental pre- and posttests would yield more in-depth results. Conducting the study in various contexts would reveal differences in results, if any, among participants of different backgrounds. Another recommendation would be to measure the impact of alternative language assessments on improving language learning. The study could also be replicated using a different methodology, such as a qualitative or mixed-methods design.