
3093 Data Processing Jobs - Page 4

JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

1.0 - 3.0 years

3 - 5 Lacs

Pune, Chennai

Work from Office

This role is for an Assoc Developer working on global platforms that are hosted across multiple countries in both public and private cloud environments. We value people who are passionate about solving business problems through innovation and engineering practices. We embrace a culture of experimentation and constantly strive for improvement and learning. You'll work in a collaborative environment that encourages diversity of thought and creative solutions that are in the best interests of our customers globally.

Objectives of this role: Develop, test, and maintain high-quality software using the Python programming language. Participate in the entire software development lifecycle, building, testing, and delivering high-quality solutions. Collaborate with cross-functional teams to identify and solve complex problems. Write clean, reusable code that can be easily maintained and scaled.

Your tasks: Create large-scale data processing pipelines to help developers build and train novel machine learning algorithms. Participate in code reviews, ensure code quality, and identify areas for improvement to implement practical solutions. Debug code when required and troubleshoot Python-related queries. Keep up to date with emerging trends and technologies in Python development.

Required skills and qualifications: 1-3 years of experience as a Python Developer with a strong portfolio of projects. Experience working with Airflow. Bachelor's degree in Computer Science, Software Engineering, or a related field. In-depth understanding of the Python software development stacks, ecosystems, frameworks, and tools such as NumPy, SciPy, Pandas, Dask, spaCy, NLTK, scikit-learn, and PyTorch. Experience with front-end development using HTML, CSS, and JavaScript. Familiarity with database technologies such as SQL and NoSQL. Excellent problem-solving ability with solid communication and collaboration skills.

Preferred skills and qualifications: Experience with popular Python frameworks such as Django, Flask, or Pyramid. Knowledge of data science and machine learning concepts and tools. A working understanding of cloud platforms such as AWS, Google Cloud, or Azure. Contributions to open-source Python projects or active involvement in the Python community.

Impact You'll Make: At TransUnion, we are dedicated to finding ways information can be used to help people make better and smarter decisions. As a trusted provider of global information solutions, our mission is to help people around the world access the opportunities that lead to a higher quality of life, by helping organizations optimize their risk-based decisions and enabling consumers to understand and manage their personal information. Because when people have access to more complete and multidimensional information, they can make more informed decisions and achieve great things. Every day TransUnion offers our employees the tools and resources they need to find ways information can be used in diverse ways. Whether it is helping businesses better manage risk, providing better insights so a consumer can qualify for their first mortgage, or working with law enforcement to make neighborhoods safer, we are improving the quality of life for individuals, families, communities, and local economies around the world.

This is a hybrid position and involves regular performance of job responsibilities virtually as well as in-person at an assigned TU office location for a minimum of two days a week.

TransUnion Job Title: Assoc Developer, Applications Development
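The listing asks for experience building data processing pipelines with Airflow. A minimal sketch of such a pipeline is below (illustrative only, assuming Airflow 2.x; the file paths, task logic, and schedule are assumptions, not part of the posting):

```python
# Minimal Airflow DAG sketch: a two-step extract/transform pipeline.
# Illustrative only; paths and task logic are hypothetical.
from datetime import datetime

import pandas as pd
from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Read raw events and stage them in a columnar format.
    df = pd.read_csv("/tmp/raw_events.csv")  # hypothetical input path
    df.to_parquet("/tmp/staged_events.parquet")


def transform(**context):
    # Aggregate staged events into a training-ready daily table.
    df = pd.read_parquet("/tmp/staged_events.parquet")
    daily = df.groupby("event_date").size().rename("event_count").reset_index()
    daily.to_parquet("/tmp/daily_counts.parquet")


with DAG(
    dag_id="example_data_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task  # transform runs only after extract succeeds
```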

Posted 1 week ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Bengaluru

Work from Office

We are seeking talented Senior Software Engineers to join our Engineering team, supporting Search Engineering efforts. In this role, you will play a key part in designing and optimizing data infrastructure, enabling real-time and batch data processing to enhance search retrieval, ranking, and product experiences. You will work closely with BE and ML engineers, data scientists, and product teams to build robust, scalable, and high-performance data systems that power personalized user experiences.

What the Candidate Will Do: Develop serving infrastructure to enhance system latency, throughput, and reliability. Enhance search relevance by improving indexing, retrieval, and ranking mechanisms. Develop and optimize search algorithms, ranking models, and query processing techniques. Implement and maintain scalable search pipelines and distributed indexing systems. Work with machine learning engineers to integrate AI-driven search ranking and personalization models. Analyze search performance metrics and run A/B experiments to measure improvements. Optimize latency, throughput, and scalability of search infrastructure. Contribute to system design and architecture decisions to improve search quality and efficiency. Write clean, efficient, and maintainable code in Python, Java, or Go. Collaborate with cross-functional teams to enhance search relevance and user experience. Monitor and troubleshoot search-related production issues to ensure system reliability.

Basic Qualifications: 5+ years of experience in software engineering. Expertise in big data technologies such as Apache Spark, Kafka, Flink, Airflow, Presto, or Snowflake. Strong experience with search and recommendation systems, working with Elasticsearch, OpenSearch, Solr, or similar technologies. Proficiency in distributed data processing frameworks and real-time streaming architectures. Deep understanding of data modeling, ETL pipelines, and data warehousing principles. Strong programming skills in Go, Python, Scala, or Java. Experience with cloud platforms (AWS, GCP, or Azure) and modern data infrastructure tools. Ability to work on high-scale distributed systems and troubleshoot performance bottlenecks. Strong problem-solving and analytical skills, with a passion for data-driven decision-making.

Preferred Qualifications: Hands-on experience with search technologies such as Elasticsearch, OpenSearch, Solr, or Vespa. Familiarity with search retrieval, ranking techniques, query understanding, and text processing.

Uber's mission is to reimagine the way the world moves for the better. Here, bold ideas create real-world impact, challenges drive growth, and speed fuels progress. What moves us, moves the world - let's move it forward, together. Offices continue to be central to collaboration and Uber's cultural identity. Unless formally approved to work fully remotely, Uber expects employees to spend at least half of their work time in their assigned office. For certain roles, such as those based at green-light hubs, employees are expected to be in-office for 100% of their time. Please speak with your recruiter to better understand in-office expectations for this role.
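The role centers on search retrieval and ranking with engines like Elasticsearch or OpenSearch. A minimal sketch of a ranked retrieval query is below (illustrative only, assuming the Elasticsearch 8.x Python client; the index name, fields, and boost values are assumptions):

```python
# Minimal Elasticsearch retrieval sketch: a boosted multi-field match query.
# Illustrative only; index name, fields, and boosts are hypothetical.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

response = es.search(
    index="products",  # hypothetical index
    query={
        "multi_match": {
            "query": "wireless headphones",
            "fields": ["title^3", "description"],  # boost title matches 3x
        }
    },
    size=10,
)

for hit in response["hits"]["hits"]:
    # _score is the lexical ranking signal; ML re-ranking would run downstream.
    print(hit["_score"], hit["_source"].get("title"))
```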

Posted 1 week ago

Apply

5.0 - 10.0 years

25 - 40 Lacs

Gurugram

Work from Office

Job Title: Data Engineer. Job Type: Full-time. Department: Data Engineering / Data Science. Reports To: Data Engineering Manager / Chief Data Officer.

About the Role: We are looking for a talented Data Engineer to join our team. As a Data Engineer, you will be responsible for designing, building, and maintaining robust data pipelines and systems that process and store large volumes of data. You will collaborate closely with data scientists, analysts, and business stakeholders to deliver high-quality, actionable data solutions. This role requires a strong background in data engineering, database technologies, and cloud platforms, along with the ability to work in an Agile environment to drive data initiatives forward.

Responsibilities: Design, build, and maintain scalable and efficient data pipelines that move, transform, and store large datasets. Develop and optimize ETL processes using tools such as Apache Spark, Apache Kafka, or AWS Glue. Work with SQL and NoSQL databases to ensure the availability, consistency, and reliability of data. Collaborate with data scientists and analysts to ensure data requirements and quality standards are met. Design and implement data models, schemas, and architectures for data lakes and data warehouses. Automate manual data processes to improve efficiency and data processing speed. Ensure data security, privacy, and compliance with industry standards and regulations. Continuously evaluate and integrate new tools and technologies to enhance data engineering processes. Troubleshoot and resolve data quality and performance issues. Participate in code reviews and contribute to a culture of best practices in data engineering.

Requirements: 3-10 years of experience as a Data Engineer or in a similar role. Strong proficiency in SQL and experience with NoSQL databases (e.g., MongoDB, Cassandra). Experience with big data technologies such as Apache Hadoop, Spark, Hive, and Kafka. Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud. Proficiency in Python, Java, or Scala for data processing and scripting. Familiarity with data warehousing concepts, tools, and technologies (e.g., Snowflake, Redshift, BigQuery). Experience working with data modeling, data lakes, and data pipelines. Solid understanding of data governance, data privacy, and security best practices. Strong problem-solving and debugging skills. Ability to work in an Agile development environment. Excellent communication skills and the ability to work cross-functionally.
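The responsibilities include ETL pipelines that move, transform, and store large datasets with tools like Apache Spark. A minimal PySpark sketch of such a pipeline is below (illustrative only; the bucket paths, columns, and partitioning scheme are assumptions):

```python
# Minimal PySpark ETL sketch: read raw CSV, clean and aggregate,
# write partitioned Parquet. Illustrative only; paths and columns
# are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

raw = spark.read.csv(
    "s3://example-bucket/raw/orders/", header=True, inferSchema=True
)

cleaned = (
    raw.dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_ts"))
    .filter(F.col("amount") > 0)  # drop obviously bad records
)

daily = cleaned.groupBy("order_date").agg(
    F.count("*").alias("orders"),
    F.sum("amount").alias("revenue"),
)

# Partitioning by date keeps downstream queries cheap.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/daily_orders/"
)
```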

Posted 1 week ago

Apply

3.0 - 8.0 years

4 - 9 Lacs

Bengaluru, Karnataka, India

On-site

Cradlepoint is seeking an innovative Automation and AI Architect with deep experience working with Large Language Models (LLMs) and a strong passion for Natural Language Understanding (NLU). This pivotal role will focus on crafting prompts that bring out the most accurate, context-aware, and human-like responses from AI systems. As part of our AI & GenAI initiatives, you'll work on complex problem statements in generative AI, building intelligent applications that support our digital transformation journey.

What You Will Do (Key Responsibilities): Design and develop prompts for a variety of applications, including text generation, translation, question answering, and creative writing. Collaborate closely with product teams and engineers to deeply understand user needs and translate them into effective prompts. Analyze and iterate on prompts based on performance metrics and user feedback to ensure the delivery of high-quality outputs. Conduct experiments and research to test new prompting techniques and continuously optimize existing workflows. Stay at the forefront of advancements in Natural Language Processing (NLP) and AI, applying these insights directly to your work. Document and communicate your work clearly and concisely to both technical and non-technical audiences. Maintain an organized and efficient workspace, consistently meeting deadlines and managing multiple projects simultaneously.

The Skills You Bring (Required Qualifications). Experience: At least 6 years of experience in prompt engineering, NLP, AI/ML, or data-related roles, with at least 3 years of proven experience in prompt engineering or a related role within the AI and chatbots domain. Technical Proficiency: Strong understanding of NLP concepts and techniques. Extensive experience working with Large Language Models (LLMs) and other AI systems. Hands-on experience with Python for scripting, automation, and AI model interaction. Practical exposure to Azure Data Factory (ADF) for building and managing data pipelines. Strong working knowledge of Snowflake for data processing and analysis. Soft Skills: Excellent written and verbal communication skills, coupled with a passion for language and storytelling. Ability to think creatively and solve problems, adapting to new challenges and changing priorities. Strong analytical skills and meticulous attention to detail to evaluate and optimize prompts. Ability to collaborate effectively within a cross-functional team. Programming (Plus): Proficiency in programming languages such as Python and experience with relevant libraries and frameworks is a significant plus.
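The posting is about designing and iterating on prompts for LLM applications. A minimal sketch of a versioned, parameterized prompt template with a toy output check is below (illustrative only; the template wording, variables, and guardrail rule are assumptions, not Cradlepoint's actual workflow):

```python
# Minimal prompt-engineering sketch: a versioned prompt template plus a
# trivial output check used when comparing prompt variants.
# Illustrative only; wording and the evaluation rule are hypothetical.
from dataclasses import dataclass


@dataclass(frozen=True)
class PromptTemplate:
    name: str
    version: str  # version prompts so experiments are reproducible
    template: str

    def render(self, **variables: str) -> str:
        return self.template.format(**variables)


qa_prompt = PromptTemplate(
    name="support_answer",
    version="v2",
    template=(
        "You are a concise technical support assistant.\n"
        "Answer the question using only the context below.\n"
        "If the context is insufficient, say so explicitly.\n\n"
        "Context: {context}\n"
        "Question: {question}\n"
        "Answer:"
    ),
)


def passes_guardrail(model_output: str) -> bool:
    # Toy check: accept outputs that either admit insufficiency or
    # give a substantive answer; a real evaluation would score many cases.
    return "insufficient" in model_output.lower() or len(model_output) > 20


prompt = qa_prompt.render(
    context="Router model X supports firmware 7.1 and later.",
    question="Does router model X support firmware 7.1?",
)
print(prompt)
print(passes_guardrail("The context is insufficient to answer."))
```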

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

maharashtra

On-site

You will be joining TUSK Research, a company dedicated to empowering businesses with AI-driven solutions for optimizing market research operations. As a Senior Decipher Survey Programmer based in Mumbai, your primary responsibilities will include developing and programming questionnaires in Decipher, overseeing data processing tasks, and supporting various market research projects. Your role will involve designing and testing surveys, maintaining data quality, debugging, and providing technical assistance to project teams. Effective collaboration with researchers and analysts to ensure smooth survey execution will be a key aspect of your job.

To excel in this role, you should possess experience in Market Research and questionnaire development, proficiency in programming and basic data processing, strong communication skills, keen attention to detail for ensuring data quality, and the ability to work collaboratively in a team environment. A Bachelor's degree in Computer Science, Information Technology, Market Research, or a related field is required, along with a minimum of 5 years of experience working with Decipher. Additionally, you should be willing to work out of the Mumbai office for at least 3 days a week to fulfill the job requirements effectively.

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

pune, maharashtra

On-site

As a Senior Systems Engineer specializing in Data DevOps/MLOps, you will play a crucial role in our team by leveraging your expertise in data engineering, automation for data pipelines, and operationalizing machine learning models. This position requires a collaborative professional who can design, deploy, and manage CI/CD pipelines for data integration and machine learning model deployment. You will be responsible for building and maintaining infrastructure for data processing and model training using cloud-native tools and services. Your role will involve automating processes for data validation, transformation, and workflow orchestration, ensuring seamless integration of ML models into production. You will work closely with data scientists, software engineers, and product teams to optimize performance and reliability of model serving and monitoring solutions. Managing data versioning, lineage tracking, and reproducibility for ML experiments will be part of your responsibilities. You will also identify opportunities to enhance scalability, streamline deployment processes, and improve infrastructure resilience. Implementing security measures to safeguard data integrity and ensure regulatory compliance will be crucial, along with diagnosing and resolving issues throughout the data and ML pipeline lifecycle.

To qualify for this role, you should hold a Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field, along with 4+ years of experience in Data DevOps, MLOps, or similar roles. Proficiency in cloud platforms like Azure, AWS, or GCP is required, as well as competency in using Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Ansible. Expertise in containerization and orchestration technologies like Docker and Kubernetes is essential, along with a background in data processing frameworks such as Apache Spark or Databricks. Skills in Python programming, including proficiency in data manipulation and ML libraries like Pandas, TensorFlow, and PyTorch, are necessary. Familiarity with CI/CD tools such as Jenkins, GitLab CI/CD, or GitHub Actions, as well as understanding version control tools like Git and MLOps platforms such as MLflow or Kubeflow, will be valuable. Knowledge of monitoring, logging, and alerting systems (e.g., Prometheus, Grafana), strong problem-solving skills, and the ability to contribute independently and within a team are also required. Excellent communication skills and attention to documentation are essential for success in this role.

Nice-to-have qualifications include knowledge of DataOps practices and tools like Airflow or dbt, an understanding of data governance concepts and platforms like Collibra, and a background in Big Data technologies like Hadoop or Hive. Qualifications in cloud platforms or data engineering would be an added advantage.
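The role names MLflow among the expected MLOps platforms, where experiment tracking and reproducibility are core tasks. A minimal MLflow tracking sketch is below (illustrative only; the model, parameters, and metric are assumptions):

```python
# Minimal MLOps sketch: track a model training run with MLflow so the
# experiment is reproducible and comparable.
# Illustrative only; the model and parameters are hypothetical.
import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="rf_baseline"):
    params = {"n_estimators": 100, "max_depth": 5}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    accuracy = accuracy_score(y_test, model.predict(X_test))

    # Log parameters, metrics, and the model artifact for lineage tracking.
    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy)
    mlflow.sklearn.log_model(model, artifact_path="model")
```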

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

hyderabad, telangana

On-site

You are a highly skilled Senior Python Developer who will be responsible for designing and developing scalable and efficient software applications using Python and AWS services. Your role involves collaborating with cross-functional teams to ensure high-quality software applications.

Key Responsibilities: Design and develop scalable and efficient software applications using Python and AWS services, including the development of RESTful APIs using Flask or Django, data processing and analytics pipelines using AWS services (e.g., S3, Lambda, Glue), and cloud-based applications using AWS services (e.g., EC2, RDS, Elastic Beanstalk). Collaborate with cross-functional teams, including development teams to ensure testability and feasibility of requirements, Quality Assurance teams to ensure alignment with testing methodologies and standards, and Product Management teams to ensure alignment with product vision and requirements. Develop and maintain AWS services, including S3 bucket management and data processing, Lambda function development and deployment, and Glue data catalog management and ETL development. Participate in testing activities, including unit testing using Python testing frameworks (e.g., unittest, pytest), integration testing using AWS services (e.g., S3, Lambda), and end-to-end testing using AWS services (e.g., API Gateway, Elastic Beanstalk). Collaborate with development teams to ensure timely and accurate defect fixes, including defect tracking and prioritization, defect reproduction and debugging, and defect verification and closure. Stay up-to-date with the latest AWS services and cloud-based technologies, and apply this knowledge to improve software applications and efficiency.

Requirements: 5+ years of experience in software development, with a strong understanding of Python and AWS services. Strong understanding of testing frameworks and tools, including Python testing frameworks (e.g., unittest, pytest) and AWS services (e.g., S3, Lambda, Glue). Experience with cloud-based architectures and AWS services, including EC2, RDS, Elastic Beanstalk, S3, Lambda, and Glue. Strong problem-solving skills, with the ability to troubleshoot and debug complex issues. Excellent communication and collaboration skills, with the ability to work effectively with cross-functional teams. Bachelor's degree in Computer Science or related field, or equivalent experience.

Nice to Have: Experience with Agile development methodologies and Scrum frameworks. Knowledge of containerization using Docker. Familiarity with DevOps tools, such as Jenkins or GitLab. Certification in AWS services or related technologies. Experience with security and compliance frameworks, such as HIPAA or PCI-DSS.

What We Offer: Competitive salary and benefits package. Opportunities for career growth and professional development. Collaborative and dynamic work environment. Flexible working hours and remote work options. Access to the latest technologies and tools. Recognition and rewards for outstanding performance.

AWS Services Experience: Experience with AWS services, including S3, Lambda, Glue, EC2, RDS, Elastic Beanstalk, API Gateway, and CloudFormation. Experience with AWS SDKs and tools, including Boto3, AWS CLI, and AWS SDKs for Python. Experience with AWS best practices and security guidelines, including IAM roles and permissions, VPC and subnet configuration, security groups and network ACLs, and data encryption and access control.
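The role involves S3 data handling and Lambda development via Boto3. A minimal sketch of that interaction is below (illustrative only; the bucket, key, and function names are assumptions):

```python
# Minimal Boto3 sketch: read an object from S3 and invoke a Lambda function.
# Illustrative only; bucket, key, and function names are hypothetical.
import json

import boto3

s3 = boto3.client("s3")
lambda_client = boto3.client("lambda")

# Fetch a raw data file staged in S3.
obj = s3.get_object(Bucket="example-data-bucket", Key="incoming/events.json")
events = json.loads(obj["Body"].read())

# Hand the payload to a processing Lambda and read its synchronous response.
response = lambda_client.invoke(
    FunctionName="process-events",  # hypothetical function
    InvocationType="RequestResponse",
    Payload=json.dumps({"events": events}),
)
result = json.loads(response["Payload"].read())
print(result)
```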

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

haryana

On-site

As a Media Planning Specialist, you will be responsible for building a strategic plan to achieve brand objectives by selecting appropriate genres and channels based on factors such as Reach, Share, Affinity, Clutter, and Stickiness. Your role will involve scheduling campaigns effectively to enhance visibility, tracking and evaluating campaign performance, and preparing television reports on a monthly basis. Additionally, you will analyze competitors' activities, conduct category studies, and collaborate with channels to optimize GRPs. You will also be involved in releasing campaigns, monitoring spots, and ensuring seamless coordination with channels to maintain GRPs. Your responsibilities will include presenting Pre & Post Campaign Analysis in the form of a deck, participating in new business development pitches, and evaluating plans using tools such as BARC. Proficiency in BARC, MPA, and TGI software is essential for this role.

To qualify for this position, you should have at least 4 years of experience in media planning within a Media Agency or trading desk. A Master's degree in Business, Marketing, or Advertising is required, along with advanced proficiency in English. In addition to technical qualifications, you should possess soft skills such as understanding business dynamics, taking initiative, focusing on performance, and building collaborative relationships. Your competencies should include expertise in media planning and buying, mastery of MX principles (Connection, Context, Content), strong negotiation skills, programmatic knowledge, data processing abilities, and a deep understanding of marketing strategy. Familiarity with tools like BARC, YUMI, IRM, and MAP, operational performance management tools such as Pilot and IOMT, and advertising technologies including ad servers and ad platforms will be advantageous in this role.

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

karnataka

On-site

The Testing Specialist will collaborate closely with the Business and Delivery teams to implement the test strategy and fulfill identified business requirements, ensuring the delivery of business value. With 8 to 10 years of experience in Quality Assurance and ETL testing, you will lead the Quality Engineering and Assurance (QEA) team, responsible for ensuring the quality and reliability of software and data products within the Databricks environment.

Your responsibilities will include developing and overseeing the quality assurance strategy and test planning for Databricks' products and solutions. You should possess a good understanding of efficient data pipelines utilizing Azure Databricks and related Azure services such as Azure Data Factory, Azure Synapse Analytics, Azure SQL Database, and Azure Blob Storage. Continuous assessment and enhancement of quality assurance processes and methodologies will be crucial to improve the efficiency and effectiveness of the QA team. You will drive the automation of testing processes, including the development and maintenance of test scripts and frameworks, and manage the entire testing lifecycle from test case creation to defect tracking and reporting. Collaboration with development and product management teams is essential to ensure software releases meet quality standards and are delivered on time. You will establish and monitor key quality metrics and performance indicators to gain insights into the quality of Databricks' products. Working with cross-functional teams, including development, data engineering, and data science, will be necessary to embed quality into all stages of product development. You will be responsible for ensuring that Databricks' products comply with security and compliance standards, conducting security testing as needed, and overseeing the identification and resolution of software defects by collaborating closely with development teams. Maintaining detailed documentation of testing processes, test cases, and results will also be part of your role.

Qualifications: Bachelor's or Master's degree in computer science, software engineering, or a related field. Extensive experience in quality engineering, software testing, and quality assurance, with a proven track record of leadership and management. Strong knowledge of software testing methodologies, test automation tools, and quality assurance best practices. Experience with big data technologies, data processing, and analytics platforms relevant to Databricks. Strong leadership and communication skills to collaborate effectively with diverse teams and stakeholders. Familiarity with the Databricks platform and related technologies is often preferred. Experience with cloud platforms like AWS or Azure may be beneficial. Knowledge of data governance, security, and compliance standards.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

chennai, tamil nadu

On-site

Join our innovative team as a Python developer with Java expertise. You will be responsible for developing scalable data-intensive applications, APIs, and scripts to ensure optimized performance. Collaboration with cross-functional teams, participation in code reviews, and contribution to best practices are key aspects of this role. Your work will involve complex data-driven projects where you will leverage your expertise to design, develop, and deploy high-quality solutions. Our team values open communication, collaborative problem-solving, and continuous learning.

As a Python developer with Java expertise, your responsibilities will include designing, developing, and deploying Python applications and scripts for data processing and analysis. You will create RESTful APIs using Java for data integration, develop and optimize data retrieval and querying mechanisms, troubleshoot and optimize data query performance and retrieval, participate in code reviews, and contribute to best practices. Additionally, you will debug issues and provide effective solutions while collaborating with cross-functional teams to identify and prioritize project requirements.

To qualify for this role, you should have a Bachelor of Technology (B.Tech) or equivalent in Computer Science/Information Technology and a minimum of 6 years of overall experience in software development. You should also possess at least 3 years of experience in Python development, 3 years of experience in Java development, and 2 years of experience with MySQL. Strong oral and written communication skills in English, as well as excellent problem-solving skills, are essential for this position. It would be beneficial to have experience with Artificial Intelligence (AI) and Machine Learning (ML) concepts, familiarity with AI/ML frameworks such as TensorFlow, PyTorch, and Scikit-Learn, and knowledge of Generative AI, Agents, and Chatbots.

About Us: As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. Partnered with industry leaders across various sectors, Oracle continues to thrive after over 40 years of change by operating with integrity. Oracle is committed to fostering an inclusive workforce that promotes opportunities for all employees. Emphasizing a work-life balance, Oracle offers competitive benefits based on parity and consistency, including flexible medical, life insurance, and retirement options. Employees are encouraged to contribute to their communities through volunteer programs. Oracle is dedicated to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodations for a disability, please contact us at accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.
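The role pairs a REST API for data integration with MySQL-backed retrieval. The posting builds its APIs in Java; for consistency with the other examples on this page, here is the same idea sketched in Python with Flask and PyMySQL (illustrative only; the table, columns, and credentials are assumptions):

```python
# Minimal REST endpoint sketch: query MySQL and return JSON.
# Illustrative only; the posting's APIs are in Java, and the table name,
# columns, and credentials here are hypothetical.
import pymysql
from flask import Flask, jsonify

app = Flask(__name__)


def get_connection():
    return pymysql.connect(
        host="localhost",
        user="app",
        password="secret",  # use a secrets manager in practice
        database="analytics",
        cursorclass=pymysql.cursors.DictCursor,
    )


@app.route("/orders/<int:order_id>")
def get_order(order_id: int):
    conn = get_connection()
    try:
        with conn.cursor() as cur:
            # Parameterized query avoids SQL injection.
            cur.execute(
                "SELECT id, status, amount FROM orders WHERE id = %s", (order_id,)
            )
            row = cur.fetchone()
    finally:
        conn.close()
    if row is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(row)


if __name__ == "__main__":
    app.run(debug=True)
```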

Posted 1 week ago

Apply

1.0 - 5.0 years

0 Lacs

kerala

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself and a better working world for all.

We are looking for an Executive Assistant who would be required to work in a team environment in delivering the secretarial needs of the global EY office senior management team, such as diary management, calendar management, travel management, documentation, tool support, and other administrative requirements that may arise on a need basis. The primary role and responsibility of this position will be to work in a team environment and deliver administrative services including, but not limited to, Diary Management, Calendar Management, Meetings Management, Travel Management, Workshop or Area Visit Plans, Documentation, Training Management, Tool Support and Administration, and Data Processing and Administration.

The role requires someone who can manage several concurrent activities with strong multi-tasking, prioritization, organizational, and time management skills. The ideal candidate should have strong project coordination skills, be comfortable using IT systems, possess excellent written and oral communication skills, and be a strong team player who is comfortable working collaboratively with others. Additionally, the candidate must be able to work virtually and independently, respond well to deadlines, work outside of normal hours when required, and work in a rapidly changing environment while prioritizing accordingly.

To qualify for the role, you must have a graduate or postgraduate degree. Ideally, you should have 1 to 4 years of experience, with at least a year of experience working in a team environment handling virtual secretarial services being preferred. A good command of English (written & spoken) is mandatory.

EY Global Delivery Services (GDS) is a dynamic and truly global delivery network that offers fulfilling career opportunities across various business disciplines. In GDS, you will collaborate with EY teams on exciting projects, work with well-known brands, and have access to continuous learning opportunities. EY provides the tools and flexibility for you to make a meaningful impact your way, offers transformative leadership insights and coaching, and fosters a diverse and inclusive culture where you can be embraced for who you are and empowered to use your voice to help others find theirs. EY exists to build a better working world, helping to create long-term value for clients, people, and society, and build trust in the capital markets. Across assurance, consulting, law, strategy, tax, and transactions, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate by asking better questions to find new answers for the complex issues facing our world today.

Posted 1 week ago

Apply

0.0 - 3.0 years

3 - 5 Lacs

Kolkata, Pune, Mumbai (All Areas)

Work from Office

Job Roles & Responsibilities Analyze and interpret complex datasets using Python to drive strategic decisions, Develop and maintain data models and reporting tools to support business objectives, Collaborate with cross-functional teams to identify

Posted 1 week ago

Apply

2.0 - 6.0 years

1 - 5 Lacs

Hyderabad

Work from Office

Career Category: Information Systems.

Join Amgen's Mission of Serving Patients. At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission to serve patients living with serious illnesses drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease), we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

What you will do: Let's do this. Let's change the world. In this vital role you will join a collaborative team implementing and supporting the next generation of safety platforms and supporting technologies. You will analyze and resolve issues with adverse event data and file transmissions across integrated systems, leveraging data analytics to identify trends, optimize workflows, and prevent future incidents. Collaborating closely with various teams, you will develop insights and implement solutions to improve system performance, ensuring reliable and efficient data flow critical to safety operations. Monitor, investigate, and resolve issues related to adverse event data processing across the safety ecosystem. Triage and conduct detailed investigations into system disruptions, data anomalies, or processing delays to determine the nature and scope of the problem. Work closely with internal teams, external vendors, and business partners to address dependencies, resolve bottlenecks for critical issues, and provide L1/L2 support. Identify inefficiencies and propose data-driven solutions to optimize and enhance reliability. Present findings and recommendations to leadership, ensuring data-driven decision-making and clear transparency into system operations. Support compliance with Key Control Indicators (KCI) and contribute to overall process governance.

What we expect of you: We are all different, yet we all use our unique contributions to serve patients. The [vital attribute] professional we seek is a [type of person] with these qualifications.

Basic Qualifications: Bachelor's degree and 2 to 6 years of Life Science / Biotechnology / Pharmacology / Information Systems experience. Demonstrated expertise in monitoring, troubleshooting, and resolving data and system issues. Good understanding of the pharmacovigilance process and knowledge of safety systems (Argus, Arisg, LSMV, etc.). Basic familiarity with ITSM tools like ServiceNow or JIRA. Ability to identify and escalate potential safety/compliance issues. Familiarity with database technologies and querying tools, including SQL (Oracle SQL, PL/SQL preferred). Experience with testing methodologies, tools, and automation practices. Familiarity with regulatory compliance testing (e.g., FDA 21 CFR Part 11, GAMP). Experienced in Agile methodology.

Preferred Qualifications: Understanding of API integrations and middleware platforms (e.g., MuleSoft). Outstanding written and verbal communication skills, and the ability to explain technical concepts to non-technical clients. Sharp learning agility, problem solving, and analytical thinking. Experienced in GxP systems and implementing GxP projects. Experience in SDLC, including requirements, design, testing, data analysis, and change control. Certification: SAFe for Teams certification (preferred).

Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Ability to deal with ambiguity and think on their feet.

Shift Information: This position requires you to work a later shift and will be assigned a third-shift schedule (overnight shift on a rotational basis). Candidates must be willing and able to work during evening or night shifts, as required based on business requirements.

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed, and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law.
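The role involves monitoring adverse event file transmissions and using analytics to spot anomalies before they become incidents. A minimal sketch of one such check is below (illustrative only; the log schema, threshold, and sample dates are assumptions, not Amgen's systems):

```python
# Minimal monitoring sketch: flag days where file transmissions drop well
# below the trailing average, a simple signal of a pipeline disruption.
# Illustrative only; the log schema and threshold are hypothetical.
import pandas as pd

# Hypothetical transmission log: one row per received file.
log = pd.DataFrame(
    {
        "received_at": pd.to_datetime(
            ["2024-01-01", "2024-01-01", "2024-01-02", "2024-01-03", "2024-01-04"]
        ),
        "file_id": ["f1", "f2", "f3", "f4", "f5"],
    }
)

daily = log.groupby(log["received_at"].dt.date).size().rename("files")
baseline = daily.rolling(window=3, min_periods=1).mean()

# Flag any day that falls below half of its trailing baseline;
# flagged days would feed a triage queue for investigation.
anomalies = daily[daily < 0.5 * baseline]
print(anomalies)
```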

Posted 1 week ago

Apply

2.0 - 6.0 years

3 - 5 Lacs

Hyderabad

Work from Office

Career Category: Information Systems.

Join Amgen's Mission of Serving Patients. At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission to serve patients living with serious illnesses drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease), we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

What you will do: Let's do this. Let's change the world. In this vital role you will join a collaborative team implementing and supporting the next generation of safety platforms and supporting technologies. You will analyze and resolve issues with adverse event data and file transmissions across integrated systems, leveraging data analytics to identify trends, optimize workflows, and prevent future incidents. Collaborating closely with various teams, you will develop insights and implement solutions to improve system performance, ensuring reliable and efficient data flow critical to safety operations. Monitor, investigate, and resolve issues related to adverse event data processing across the safety ecosystem. Triage and conduct detailed investigations into system disruptions, data anomalies, or processing delays to determine the nature and scope of the problem. Work closely with internal teams, external vendors, and business partners to address dependencies, resolve bottlenecks for critical issues, and provide L1/L2 support. Identify inefficiencies and propose data-driven solutions to optimize and enhance reliability. Present findings and recommendations to leadership, ensuring data-driven decision-making and clear transparency into system operations. Support compliance with Key Control Indicators (KCI) and contribute to overall process governance.

What we expect of you: We are all different, yet we all use our unique contributions to serve patients. The [vital attribute] professional we seek is a [type of person] with these qualifications.

Basic Qualifications: Bachelor's degree and 2 to 6 years of Life Science / Biotechnology / Pharmacology / Information Systems experience. Demonstrated expertise in monitoring, troubleshooting, and resolving data and system issues. Good understanding of the pharmacovigilance process and knowledge of safety systems (Argus, Arisg, LSMV, etc.). Basic familiarity with ITSM tools like ServiceNow or JIRA. Ability to identify and escalate potential safety/compliance issues. Familiarity with database technologies and querying tools, including SQL (Oracle SQL, PL/SQL preferred). Experience with testing methodologies, tools, and automation practices. Familiarity with regulatory compliance testing (e.g., FDA 21 CFR Part 11, GAMP). Experienced in Agile methodology.

Preferred Qualifications: Understanding of API integrations and middleware platforms (e.g., MuleSoft). Outstanding written and verbal communication skills, and the ability to explain technical concepts to non-technical clients. Sharp learning agility, problem solving, and analytical thinking. Experienced in GxP systems and implementing GxP projects. Experience in SDLC, including requirements, design, testing, data analysis, and change control. Certification: SAFe for Teams certification (preferred).

Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Ability to deal with ambiguity and think on their feet.

Shift Information: This position requires you to work a later shift and will be assigned a third-shift schedule (overnight shift on a rotational basis). Candidates must be willing and able to work during evening or night shifts, as required based on business requirements.

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed, and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law.

Posted 1 week ago

Apply

8.0 - 10.0 years

22 - 27 Lacs

Pune

Work from Office

AI/ML Engineer (Specializing in NLP/ML, Large Data Processing, and Generative AI)

Job Summary: Synechron seeks a highly skilled AI/ML Engineer specializing in Natural Language Processing (NLP), Large Language Models (LLMs), Foundation Models (FMs), and Generative AI (GenAI). The successful candidate will design, develop, and deploy advanced AI solutions, contributing to innovative projects that transform monolithic systems into scalable microservices integrated with leading cloud platforms such as Azure, Amazon Bedrock, and Google Gemini. This role plays a critical part in advancing Synechron's capabilities in cutting-edge AI technologies, enabling impactful business insights and product innovations.

Software Requirements. Required proficiency: Python (core libraries: TensorFlow, PyTorch, Hugging Face Transformers, etc.); cloud platforms: Azure, AWS, Google Cloud (familiarity with AI/ML services); containerization: Docker, Kubernetes; version control: Git; data management tools: SQL, NoSQL databases (e.g., MongoDB); model deployment and MLOps tools: MLflow, CI/CD pipelines, monitoring tools. Preferred skills: experience with cloud-native AI frameworks and SDKs, familiarity with AutoML tools, additional programming languages (e.g., Java, Scala).

Overall Responsibilities: Design, develop, and optimize NLP models, including advanced LLMs and Foundation Models, for diverse business use cases. Lead the development of large data pipelines for training, fine-tuning, and deploying models on big data platforms. Architect, implement, and maintain scalable AI solutions in line with MLOps best practices. Transition legacy monolithic AI systems into modular, microservices-based architectures for scalability and maintainability. Build end-to-end AI applications from scratch, including data ingestion, model training, deployment, and integration. Implement retrieval-augmented generation techniques for enhanced context understanding and response accuracy. Conduct thorough testing, validation, and debugging of AI/ML models and pipelines. Collaborate with cross-functional teams to embed AI capabilities into customer-facing and enterprise products. Support ongoing maintenance, monitoring, and scaling of deployed AI systems. Document system designs, workflows, and deployment procedures for compliance and knowledge sharing.

Performance Outcomes: Production-ready AI solutions delivering high accuracy and efficiency. Robust data pipelines supporting training and inference at scale. Seamless integration of AI models with cloud infrastructure. Effective collaboration leading to innovative AI product deployment.

Technical Skills (By Category). Programming languages: essential: Python (TensorFlow, PyTorch, Hugging Face, etc.); preferred: Java, Scala. Databases/data management: SQL (PostgreSQL, MySQL), NoSQL (MongoDB, DynamoDB). Cloud technologies: Azure AI, AWS SageMaker, Bedrock, Google Cloud Vertex AI, Gemini. Frameworks and libraries: Transformers, Keras, scikit-learn, XGBoost, Hugging Face engines. Development tools and methodologies: Docker, Kubernetes, Git, CI/CD pipelines (Jenkins, Azure DevOps). Security and compliance: knowledge of data security standards and privacy policies (GDPR, HIPAA as applicable).

Experience Requirements: 8 to 10 years of hands-on experience in AI/ML development, especially NLP and Generative AI. Demonstrated expertise in designing, fine-tuning, and deploying LLMs, FMs, and GenAI solutions. Proven ability to develop end-to-end AI applications within cloud environments. Experience transforming monolithic architectures into scalable microservices. Strong background with big data processing pipelines. Prior experience working with cloud-native AI tools and frameworks. Industry experience in finance, healthcare, or technology sectors is advantageous. Alternative experience: candidates with extensive research or academic experience in AI/ML, especially in NLP and large-scale data processing, are eligible if they have practical deployment experience.

Day-to-Day Activities: Develop and optimize sophisticated NLP/GenAI models fulfilling business requirements. Lead data pipeline construction for training and inference workflows. Collaborate with data engineers, architects, and product teams to ensure scalable deployment. Conduct model testing, validation, and performance tuning. Implement and monitor model deployment pipelines, troubleshoot issues, and improve system robustness. Document models, pipelines, and deployment procedures for audit and knowledge sharing. Stay updated with emerging AI/ML trends, integrating best practices into projects. Present findings, progress updates, and technical guidance to stakeholders.

Qualifications: Bachelor's degree in Computer Science, Data Science, or a related field; Master's or PhD preferred. Certifications in AI/ML, cloud (e.g., AWS, Azure, Google Cloud), or data engineering are a plus. Proven professional experience with advanced NLP and Generative AI solutions. Commitment to continuous learning to keep pace with rapidly evolving AI technologies.

Professional Competencies: Strong analytical and problem-solving capabilities. Excellent communication skills, capable of translating complex technical concepts. Collaborative team player with experience working across global teams. Adaptability to rapidly changing project scopes and emerging AI trends. Innovation-driven mindset with a focus on delivering impactful solutions. Time management skills to prioritize and manage multiple projects effectively.
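The responsibilities include implementing retrieval-augmented generation. A minimal sketch of the retrieval step is below (illustrative only; a toy bag-of-words similarity stands in for a real embedding model and vector store, and the corpus and prompt format are assumptions):

```python
# Minimal retrieval-augmented generation sketch: retrieve the most relevant
# passage for a query, then assemble a grounded prompt for an LLM.
# Illustrative only; toy bag-of-words cosine similarity stands in for
# real embeddings, and the prompt format is hypothetical.
import math
from collections import Counter

corpus = [
    "Azure, Bedrock, and Gemini expose foundation models through managed APIs.",
    "Microservices split a monolith into independently deployable services.",
    "Retrieval-augmented generation grounds LLM answers in retrieved documents.",
]


def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0


query = "How does retrieval-augmented generation work?"
q_vec = vectorize(query)

# Retrieve the best-matching passage, then ground the prompt in it.
best = max(corpus, key=lambda doc: cosine(q_vec, vectorize(doc)))
prompt = f"Answer using this context:\n{best}\n\nQuestion: {query}"
print(prompt)
```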

Posted 1 week ago

Apply

5.0 - 9.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Career Category: Information Systems.

Join Amgen's Mission of Serving Patients. At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission to serve patients living with serious illnesses drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease), we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

What you will do: As a Sr. Associate BI Engineer, you will support the development and delivery of data-driven solutions that enable business insights and operational efficiency. You will work closely with senior data engineers, analysts, and stakeholders to support and build dashboards, analyze data, and contribute to the design of scalable reporting systems. This is an ideal role for early-career professionals looking to grow their technical and analytical skills in a collaborative environment.

Roles & Responsibilities: Design and maintain dashboards and reports using tools like Spotfire, Power BI, and Tableau. Perform data analysis to identify trends and support business decisions. Gather BI requirements and translate them into technical specifications. Support data validation, testing, and documentation efforts. Apply best practices in data modeling, visualization, and BI development. Participate in Agile ceremonies and contribute to sprint planning and backlog grooming.

What we expect of you: We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications: Master's or Bachelor's degree and 5 to 9 years of Computer Science, IT, or related field experience.

Must-Have Skills: Strong knowledge of Spotfire. Exposure to other data visualization tools such as Power BI, Tableau, or QuickSight. Proficiency in SQL and scripting languages (e.g., Python) for data processing and analysis. Familiarity with data modeling, warehousing, and ETL pipelines. Understanding of data structures and reporting concepts. Strong analytical and problem-solving skills.

Preferred Qualifications: Familiarity with cloud services like AWS (e.g., Redshift, S3, EC2, IAM) and Databricks (Delta Lake, Unity Catalog, tokens, etc.). Understanding of Agile methodologies (Scrum, SAFe). Knowledge of DevOps and CI/CD practices. Familiarity with scientific or healthcare data domains.

Soft Skills: Excellent critical-thinking and problem-solving skills. Strong communication and collaboration skills. Demonstrated awareness of how to function in a team setting. Demonstrated presentation skills.

What you can expect of us: As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now and make a lasting impact with the Amgen team: careers.amgen.com. As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed, and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
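The role pairs SQL and Python for the data processing behind dashboards. A minimal sketch of shaping query results for a visualization layer is below (illustrative only; the schema and metric are assumptions, and an in-memory SQLite database stands in for a real warehouse):

```python
# Minimal BI data-prep sketch: query a SQL table and pivot it into the
# wide shape a dashboard tool typically consumes.
# Illustrative only; the schema and metric are hypothetical, and SQLite
# stands in for a real warehouse such as Redshift.
import sqlite3

import pandas as pd

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE sales (region TEXT, month TEXT, revenue REAL);
    INSERT INTO sales VALUES
        ('North', '2024-01', 120.0), ('North', '2024-02', 135.0),
        ('South', '2024-01', 90.0),  ('South', '2024-02', 110.0);
    """
)

df = pd.read_sql_query("SELECT region, month, revenue FROM sales", conn)

# One row per region, one column per month: ready for a dashboard chart.
wide = df.pivot(index="region", columns="month", values="revenue")
print(wide)
```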

Posted 1 week ago

Apply

0.0 - 1.0 years

1 - 1 Lacs

Kolkata

Work from Office

Responsibilities:
* Maintain confidentiality of sensitive information
* Collaborate with team on project deliverables
* Input data accurately using computer software
* Verify accuracy through data checks

Posted 1 week ago

Apply

4.0 - 8.0 years

8 - 11 Lacs

Bengaluru

Work from Office

Role Overview: Skyhigh Security is hiring a Business Intelligence Developer to join our Engineering Excellence organization. Reporting to the VP of Engineering Excellence, you'll play a key role in helping the company measure, visualize, and optimize software delivery performance by integrating data from across the engineering toolchain and building actionable, insightful dashboards.

Working alongside engineering teams, DevOps, and engineering-quality leaders, you'll design and implement dashboards that surface DORA metrics and other critical indicators of engineering velocity, quality, and stability. Your work will provide engineering leaders and teams with the visibility they need to drive continuous improvement, reduce bottlenecks, and increase delivery confidence. This role combines deep technical BI skills with an understanding of software development workflows. You will own the integration of data from tools like Jira, GitHub, Jenkins, and CI/CD pipelines, and apply your expertise in SQL, Python, and visualization tools to transform raw data into insights that matter.

Responsibilities: Design and build dashboards that track DORA metrics and other Agile delivery KPIs. Integrate data from developer tools (e.g., Jira, GitHub, CI/CD platforms) via APIs, webhooks, and ETL pipelines. Define data models and transformations to support clear, reliable visualizations. Partner with engineering leaders to define metric strategies aligned to business and technical outcomes. Ensure reporting solutions are scalable, accurate, and tailored to various stakeholder needs.

Requirements: Strong skills in SQL and Python for data processing and integration. Hands-on experience with Tableau, EazyBI, Power BI, or similar tools. Deep understanding of DORA metrics, Agile metrics, and engineering performance indicators. Familiarity with toolchain APIs (e.g., Jira, GitHub, Jenkins, ArgoCD) and ETL practices. Strong data modeling and storytelling skills with a focus on clarity and actionability.
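The role centers on computing DORA metrics from toolchain data. A minimal sketch of two of them (deployment frequency and lead time for changes) is below (illustrative only; the event shape and sample values are assumptions — real data would come from the Jira, GitHub, or Jenkins APIs named above):

```python
# Minimal DORA-metrics sketch: compute deployment frequency and lead time
# for changes from a list of deploy events.
# Illustrative only; the event shape is hypothetical (real data would be
# pulled from Jira/GitHub/Jenkins APIs).
from datetime import datetime, timedelta

deploys = [
    {"committed_at": datetime(2024, 1, 1, 9), "deployed_at": datetime(2024, 1, 1, 15)},
    {"committed_at": datetime(2024, 1, 2, 10), "deployed_at": datetime(2024, 1, 3, 11)},
    {"committed_at": datetime(2024, 1, 5, 8), "deployed_at": datetime(2024, 1, 5, 9)},
]

window_days = 7
deploy_frequency = len(deploys) / window_days  # deploys per day over the window

# Lead time for changes: commit-to-deploy duration, averaged.
lead_times = [d["deployed_at"] - d["committed_at"] for d in deploys]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)

print(f"Deployment frequency: {deploy_frequency:.2f}/day")
print(f"Average lead time for changes: {avg_lead_time}")
```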

Posted 1 week ago

Apply

5.0 - 10.0 years

25 - 40 Lacs

Bengaluru

Work from Office

Job Title: Data Engineer. Job Type: Full-time. Department: Data Engineering / Data Science. Reports To: Data Engineering Manager / Chief Data Officer.

About the Role: We are looking for a talented Data Engineer to join our team. As a Data Engineer, you will be responsible for designing, building, and maintaining robust data pipelines and systems that process and store large volumes of data. You will collaborate closely with data scientists, analysts, and business stakeholders to deliver high-quality, actionable data solutions. This role requires a strong background in data engineering, database technologies, and cloud platforms, along with the ability to work in an Agile environment to drive data initiatives forward.

Responsibilities: Design, build, and maintain scalable and efficient data pipelines that move, transform, and store large datasets. Develop and optimize ETL processes using tools such as Apache Spark, Apache Kafka, or AWS Glue. Work with SQL and NoSQL databases to ensure the availability, consistency, and reliability of data. Collaborate with data scientists and analysts to ensure data requirements and quality standards are met. Design and implement data models, schemas, and architectures for data lakes and data warehouses. Automate manual data processes to improve efficiency and data processing speed. Ensure data security, privacy, and compliance with industry standards and regulations. Continuously evaluate and integrate new tools and technologies to enhance data engineering processes. Troubleshoot and resolve data quality and performance issues. Participate in code reviews and contribute to a culture of best practices in data engineering.

Requirements: 3-10 years of experience as a Data Engineer or in a similar role. Strong proficiency in SQL and experience with NoSQL databases (e.g., MongoDB, Cassandra). Experience with big data technologies such as Apache Hadoop, Spark, Hive, and Kafka. Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud. Proficiency in Python, Java, or Scala for data processing and scripting. Familiarity with data warehousing concepts, tools, and technologies (e.g., Snowflake, Redshift, BigQuery). Experience working with data modeling, data lakes, and data pipelines. Solid understanding of data governance, data privacy, and security best practices. Strong problem-solving and debugging skills. Ability to work in an Agile development environment. Excellent communication skills and the ability to work cross-functionally.

Posted 1 week ago

Apply

5.0 - 10.0 years

25 - 40 Lacs

Pune

Work from Office

Job Title: Data Engineer
Job Type: Full-time
Department: Data Engineering / Data Science
Reports To: Data Engineering Manager / Chief Data Officer
About the Role: We are looking for a talented Data Engineer to join our team. As a Data Engineer, you will be responsible for designing, building, and maintaining robust data pipelines and systems that process and store large volumes of data. You will collaborate closely with data scientists, analysts, and business stakeholders to deliver high-quality, actionable data solutions. This role requires a strong background in data engineering, database technologies, and cloud platforms, along with the ability to work in an Agile environment to drive data initiatives forward.
Responsibilities:
Design, build, and maintain scalable and efficient data pipelines that move, transform, and store large datasets.
Develop and optimize ETL processes using tools such as Apache Spark, Apache Kafka, or AWS Glue.
Work with SQL and NoSQL databases to ensure the availability, consistency, and reliability of data.
Collaborate with data scientists and analysts to ensure data requirements and quality standards are met.
Design and implement data models, schemas, and architectures for data lakes and data warehouses.
Automate manual data processes to improve efficiency and data processing speed.
Ensure data security, privacy, and compliance with industry standards and regulations.
Continuously evaluate and integrate new tools and technologies to enhance data engineering processes.
Troubleshoot and resolve data quality and performance issues.
Participate in code reviews and contribute to a culture of best practices in data engineering.
Requirements:
3-10 years of experience as a Data Engineer or in a similar role.
Strong proficiency in SQL and experience with NoSQL databases (e.g., MongoDB, Cassandra).
Experience with big data technologies such as Apache Hadoop, Spark, Hive, and Kafka.
Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud.
Proficiency in Python, Java, or Scala for data processing and scripting.
Familiarity with data warehousing concepts, tools, and technologies (e.g., Snowflake, Redshift, BigQuery).
Experience working with data modeling, data lakes, and data pipelines.
Solid understanding of data governance, data privacy, and security best practices.
Strong problem-solving and debugging skills.
Ability to work in an Agile development environment.
Excellent communication skills and the ability to work cross-functionally.
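This posting also lists Apache Kafka; as one illustration of the streaming side of the role, here is a minimal consumer sketch using the kafka-python client. The topic name 'clickstream', the broker address, and the event fields are hypothetical.

```python
# Minimal sketch of a streaming ingest step with the kafka-python client,
# assuming a hypothetical 'clickstream' topic carrying JSON events.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "clickstream",                          # hypothetical topic name
    bootstrap_servers="localhost:9092",     # adjust for your cluster
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # A real pipeline would validate, enrich, and batch events before
    # writing to a lake or warehouse; this sketch just echoes them.
    print(event.get("user_id"), event.get("page"))
```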

Posted 1 week ago

Apply

5.0 - 10.0 years

25 - 40 Lacs

Noida

Work from Office

Job Title: Data Engineer
Job Type: Full-time
Department: Data Engineering / Data Science
Reports To: Data Engineering Manager / Chief Data Officer
About the Role: We are looking for a talented Data Engineer to join our team. As a Data Engineer, you will be responsible for designing, building, and maintaining robust data pipelines and systems that process and store large volumes of data. You will collaborate closely with data scientists, analysts, and business stakeholders to deliver high-quality, actionable data solutions. This role requires a strong background in data engineering, database technologies, and cloud platforms, along with the ability to work in an Agile environment to drive data initiatives forward.
Responsibilities:
Design, build, and maintain scalable and efficient data pipelines that move, transform, and store large datasets.
Develop and optimize ETL processes using tools such as Apache Spark, Apache Kafka, or AWS Glue.
Work with SQL and NoSQL databases to ensure the availability, consistency, and reliability of data.
Collaborate with data scientists and analysts to ensure data requirements and quality standards are met.
Design and implement data models, schemas, and architectures for data lakes and data warehouses.
Automate manual data processes to improve efficiency and data processing speed.
Ensure data security, privacy, and compliance with industry standards and regulations.
Continuously evaluate and integrate new tools and technologies to enhance data engineering processes.
Troubleshoot and resolve data quality and performance issues.
Participate in code reviews and contribute to a culture of best practices in data engineering.
Requirements:
3-10 years of experience as a Data Engineer or in a similar role.
Strong proficiency in SQL and experience with NoSQL databases (e.g., MongoDB, Cassandra).
Experience with big data technologies such as Apache Hadoop, Spark, Hive, and Kafka.
Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud.
Proficiency in Python, Java, or Scala for data processing and scripting.
Familiarity with data warehousing concepts, tools, and technologies (e.g., Snowflake, Redshift, BigQuery).
Experience working with data modeling, data lakes, and data pipelines.
Solid understanding of data governance, data privacy, and security best practices.
Strong problem-solving and debugging skills.
Ability to work in an Agile development environment.
Excellent communication skills and the ability to work cross-functionally.
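For the data-quality side of this role, a common warehouse pattern is deduplicating staged records so only the latest version per key survives. The sketch below shows the ROW_NUMBER idiom, using SQLite as a stand-in for a real warehouse (Snowflake, Redshift, and BigQuery support the same construct); the table and column names are illustrative.

```python
# Minimal sketch: keep only the latest record per key with a window function.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE staging_customers (id INTEGER, email TEXT, updated_at TEXT)")
con.executemany(
    "INSERT INTO staging_customers VALUES (?, ?, ?)",
    [(1, "a@example.com", "2024-06-01"),
     (1, "a+new@example.com", "2024-06-09"),   # later version of id 1
     (2, "b@example.com", "2024-06-05")],
)

latest = con.execute("""
    SELECT id, email, updated_at FROM (
        SELECT *, ROW_NUMBER() OVER (
            PARTITION BY id ORDER BY updated_at DESC
        ) AS rn
        FROM staging_customers
    ) WHERE rn = 1
""").fetchall()

print(latest)  # one row per id; the latest version wins
```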

Posted 1 week ago

Apply

0.0 - 2.0 years

0 - 1 Lacs

Goregaon, Mumbai (All Areas)

Work from Office

Key Responsibilities:
Assist in the preparation and maintenance of financial records, ensuring compliance with company policies and regulatory requirements.
Support the preparation of financial statements, reports, and reconciliations.
Assist in managing accounts payable and receivable, ensuring accuracy in all transactions.
Ensure timely and accurate posting of all financial transactions to the accounting system.
Assist with monthly, quarterly, and year-end closing activities.
Help prepare and analyze financial statements and reports, identifying trends or issues for management review.
Support audits by providing necessary documentation and addressing auditor queries.
Collaborate with other departments to streamline financial operations and improve efficiency.
Requirements:
Bachelor's degree in Accounting, Finance, or a related field.
Proven knowledge of accounting principles, financial reporting, and relevant regulations.
Familiarity with accounting software (e.g., SAP, QuickBooks, Tally, or similar).
Strong Excel skills and proficiency in financial analysis.
Excellent attention to detail and accuracy.
Strong communication skills.

Posted 1 week ago

Apply

0.0 - 1.0 years

0 - 1 Lacs

Thane, Mumbai (All Areas)

Work from Office

Good typing speed and accuracy; basic computer knowledge.
Enter, update, and verify data in systems accurately.
Maintain records and prepare reports as needed.
Coordinate with internal teams to ensure data consistency.
Required Candidate Profile: Minimum 6 months' experience in back office/data entry. Willingness to work 24/7 rotational shifts.

Posted 1 week ago

Apply

0.0 - 1.0 years

0 - 1 Lacs

Thane

Work from Office

Good typing speed and accuracy; basic computer knowledge.
Enter, update, and verify data in systems accurately.
Maintain records and prepare reports as needed.
Coordinate with internal teams to ensure data consistency.
Required Candidate Profile: Minimum 6 months' experience in back office/data entry. Willingness to work 24/7 rotational shifts.

Posted 1 week ago

Apply

2.0 - 5.0 years

2 - 4 Lacs

Thiruvananthapuram

Work from Office

Role & responsibilities: Outline the day-to-day responsibilities for this role. Preferred candidate profile: Specify required role expertise, previous job experience, or relevant certifications.

Posted 1 week ago

Apply