6.0 - 8.0 years
10 - 15 Lacs
Bengaluru
Work from Office
Job Summary
Synechron is seeking a highly skilled and proactive Data Engineer to join our dynamic data analytics team. In this role, you will be instrumental in designing, developing, and maintaining scalable data pipelines and solutions on the Google Cloud Platform (GCP). With your expertise, you'll enable data-driven decision-making, contribute to strategic business initiatives, and ensure robust data infrastructure. This position offers an opportunity to work in a collaborative environment with a focus on innovative technologies and continuous growth.

Software Requirements
Required:
- Proficiency in data engineering tools and frameworks such as Hive, Apache Spark, and Python (version 3.x)
- Extensive experience working with Google Cloud Platform (GCP) offerings including Dataflow, BigQuery, Cloud Storage, and Pub/Sub
- Familiarity with Git, Jira, and Confluence for version control and collaboration
Preferred:
- Experience with additional GCP services such as Dataproc, Data Studio, or Cloud Composer
- Exposure to other programming languages such as Java or Scala
- Knowledge of data security best practices and tools

Overall Responsibilities
- Design, develop, and optimize scalable data pipelines on GCP to support analytics and reporting needs
- Collaborate with cross-functional teams to translate business requirements into technical solutions
- Build and maintain data models, ensuring data quality, integrity, and security
- Participate actively in code reviews, adhering to best practices and standards
- Develop automated and efficient data workflows to improve system performance
- Stay updated with emerging data engineering trends and continuously improve technical skills
- Provide technical guidance and support to team members, fostering a collaborative environment
- Ensure timely delivery of deliverables aligned with project milestones

Technical Skills (By Category)
Programming Languages:
- Essential: Python (required)
- Preferred: Java, Scala
Data Management & Databases:
- Experience with Hive, BigQuery, and relational databases
- Knowledge of data warehousing concepts and SQL proficiency
Cloud Technologies:
- Extensive hands-on experience with GCP services including Dataflow, BigQuery, Cloud Storage, Pub/Sub, and Composer
- Ability to build and optimize data pipelines leveraging GCP offerings (see the illustrative sketch below)
Frameworks & Libraries:
- Spark (PySpark preferred); Hadoop ecosystem experience is advantageous
Development Tools & Methodologies:
- Agile/Scrum methodologies, version control with Git, project tracking via Jira, documentation on Confluence
Security Protocols:
- Understanding of data security, privacy, and compliance standards

Experience Requirements
- Minimum of 6-8 years in data or software engineering roles with a focus on data pipeline development
- Proven experience in designing and implementing data solutions on cloud platforms, particularly GCP
- Prior experience working in agile teams, participating in code reviews, and delivering end-to-end data projects
- Experience working with cross-disciplinary teams and understanding varied stakeholder requirements
- Exposure to industry best practices for data security, governance, and quality assurance is desired

Day-to-Day Activities
- Attend daily stand-up meetings and contribute to project planning sessions
- Collaborate with business analysts, data scientists, and other stakeholders to understand data needs
- Develop, test, and deploy scalable data pipelines, ensuring efficiency and reliability
- Perform regular code reviews, provide constructive feedback, and uphold coding standards
- Document technical solutions and maintain clear records of data workflows
- Troubleshoot and resolve technical issues in data processing environments
- Participate in continuous learning initiatives to stay abreast of technological developments
- Support team members by sharing knowledge and resolving technical challenges

Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
- Relevant professional certifications in GCP (such as Google Cloud Professional Data Engineer) are preferred but not mandatory
- Demonstrable experience in data engineering and cloud technologies

Professional Competencies
- Strong analytical and problem-solving skills, with a focus on outcome-driven solutions
- Excellent communication and interpersonal skills to effectively collaborate within teams and with stakeholders
- Ability to work independently with minimal supervision and manage multiple priorities effectively
- Adaptability to evolving technologies and project requirements
- Demonstrated initiative in driving tasks forward and a continuous-improvement mindset
- Strong organizational skills with a focus on quality and attention to detail

SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative "Same Difference" is committed to fostering an inclusive culture promoting equality, diversity, and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
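For a concrete sense of the stack this posting describes, here is a minimal, illustrative Apache Beam sketch (Beam is the SDK behind Dataflow) that streams Pub/Sub messages into BigQuery. This is not Synechron's code; the project, topic, table, and schema names are hypothetical.

```python
# A minimal sketch of a Dataflow-style streaming pipeline: Pub/Sub -> BigQuery.
# All resource names below are placeholders, not a real deployment.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def run():
    # Pass --runner=DataflowRunner, --project, --region, etc. on the CLI.
    options = PipelineOptions(streaming=True)
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadEvents" >> beam.io.ReadFromPubSub(
                topic="projects/my-project/topics/events"
            )
            | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
            | "WriteToBQ" >> beam.io.WriteToBigQuery(
                "my-project:analytics.events",
                schema="event_id:STRING,user_id:STRING,ts:TIMESTAMP",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )


if __name__ == "__main__":
    run()
```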
Posted 5 days ago
4.0 - 9.0 years
8 - 18 Lacs
Chennai, Coimbatore, Vellore
Work from Office
We at Blackstraw.ai are organizing a walk-in interview drive for Data Engineers with a minimum of 3 years' experience in Python, Spark, PySpark, Hadoop, Hive, Snowflake, AWS, and Databricks.

We are looking for a Data Engineer to join our team. You will use various methods to transform raw data into useful data systems. You'll strive for efficiency by aligning data systems with business goals. To succeed in this position, you should have strong analytical skills and the ability to combine data from different sources. Data engineer skills also include familiarity with several programming languages and an understanding of machine learning methods. If you are detail-oriented, with excellent organizational skills and experience in this field, we'd like to hear from you.

Job Requirements
- Participate in the customer's system design meetings and collect the functional/technical requirements.
- Responsible for meeting customer expectations on real-time data integrity and implementing efficient solutions.
- A clear understanding of Python, Spark, PySpark, Hive, Kafka, and RDBMS architecture.
- Experience in writing Spark/Python programs and SQL queries.
- Suggest and implement best practices in data integration.
- Guide the QA team in defining system integration tests as needed.
- Split the planned deliverables into tasks and assign them to the team.

Good to have: Knowledge of CI/CD concepts and Apache Kafka.

Key traits
- Excellent communication skills.
- Self-motivated and willing to work as part of a team.
- Able to collaborate and coordinate in a remote environment.
- A proactive problem solver who tackles challenges head-on.

Important Instructions
- Carry a hard copy of your resume, one passport photograph, and a government identity proof for ease of access to our premises.
- Please note: do not carry any electronic devices apart from your mobile phone on office premises.
- Send your resume to chennai.walkin@blackstraw.ai
- Kindly fill in the form below to submit your registration: https://forms.gle/LtNYvGM8pbxMifXw6
- Preference will be given to immediate joiners or those who can join within 10-15 days.
Posted 5 days ago
6.0 - 11.0 years
18 - 33 Lacs
Noida, Pune, Delhi / NCR
Hybrid
Iris Software has been a trusted software engineering partner to several Fortune 500 companies for over three decades. We help clients realize the full potential of technology-enabled transformation by bringing together a unique blend of domain knowledge, best-of-breed technologies, and experience executing essential and critical application development engagements.

Title - Sr Data Engineer / Lead Data Engineer
Experience - 5-12 years
Location - Delhi/NCR, Pune
Shift - 12:30-9:30 pm IST

Requirements:
- 6+ years of experience in data engineering with a strong focus on AWS services.
- Proven expertise in Amazon S3 for scalable data storage; AWS Glue for ETL and serverless data integration (see the sketch below); and DataSync, EMR, and Redshift for data warehousing and analytics.
- Proficiency in SQL, Python, or PySpark for data processing.
- Experience with data modeling, partitioning strategies, and performance optimization.
- Familiarity with orchestration tools like AWS Step Functions, Apache Airflow, or Glue Workflows.

If interested, kindly share your resume at kanika.singh@irissoftware.com
Note - Notice period: maximum 1 month.
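To illustrate the Glue requirement above, here is a hedged sketch of a Glue PySpark job that reads raw CSV from S3, cleans it, and writes partitioned Parquet. Bucket names, prefixes, and columns are invented for the example; a real job would take its paths from job parameters.

```python
# Minimal AWS Glue ETL job skeleton: S3 landing zone -> curated Parquet.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw landing-zone data from S3 (hypothetical bucket/prefix).
raw = spark.read.option("header", "true").csv("s3://my-landing-bucket/orders/")

# Example transformation: type the amount column and drop obviously bad rows.
cleaned = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount").isNotNull())
)

# Write curated Parquet, partitioned by order date, for Athena/Redshift Spectrum.
cleaned.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://my-curated-bucket/orders/"
)

job.commit()
```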
Posted 5 days ago
3.0 - 8.0 years
5 - 9 Lacs
Gurugram
Work from Office
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
- Analyze business requirements and functional specifications
- Determine the impact of changes on current system functionality
- Interact with diverse business partners and technical workgroups
- Be flexible to collaborate with onshore business during US business hours
- Be flexible to support project releases during US business hours
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- Undergraduate degree or equivalent experience
- 3+ years of working experience in Python, PySpark, Scala
- 3+ years of experience working on MS SQL Server and NoSQL DBs like Cassandra
- Hands-on working experience in Azure Databricks
- Solid healthcare domain knowledge
- Exposure to DevOps methodology and creating CI/CD deployment pipelines
- Exposure to Agile methodology, specifically using tools like Rally
- Ability to understand the existing application codebase, perform impact analysis, and update the code when required based on business logic or for optimization
- Proven excellent analytical and communication skills (both verbal and written)

Preferred Qualification:
- Experience in streaming applications (Kafka, Spark Streaming, etc.)

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location, and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health, which are disproportionately experienced by people of color, historically marginalized groups, and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission. #Gen #NJP
Posted 5 days ago
0 years
14 - 21 Lacs
Greater Kolkata Area
On-site
Our client, a professional services firm, is the Indian member firm affiliated with International and was established in September 1993. Its professionals leverage the global network of firms, providing detailed knowledge of local laws, regulations, markets, and competition. Our client has offices across India in Ahmedabad, Bengaluru, Chandigarh, Chennai, Gurugram, Hyderabad, Jaipur, Kochi, Kolkata, Mumbai, Noida, Pune, and Vadodara, and offers services to national and international clients in India across sectors. It strives to provide rapid, performance-based, industry-focused, and technology-enabled services that reflect a shared knowledge of global and local industries and its experience of the Indian business environment.

Responsibilities:
- Build and optimize data pipelines using Python and PySpark
- Perform data analysis and generate reports
- Write and maintain SQL queries for data extraction

Qualifications:
- Proficiency in Python, PySpark, and SQL
- Experience in data analysis and pipeline development
- Strong analytical and problem-solving abilities

Benefits:
- Work with one of the Big 4 firms in India
- Healthy work environment
- Work-life balance
Posted 5 days ago
7.0 - 12.0 years
0 Lacs
Tamil Nadu, India
Remote
Tiger Analytics is a global analytics consulting firm. With data and technology at the core of our solutions, we are solving some of the toughest problems out there. Our culture is modelled around expertise and mutual respect with a team-first mindset. Working at Tiger, you'll be at the heart of this AI revolution. You'll work with teams that push the boundaries of what is possible and build solutions that energize and inspire. We are headquartered in Silicon Valley and have delivery centres across the globe. The role below is for our Chennai or Bangalore office, or you can choose to work remotely.

About The Role
As a Program Lead - Healthcare Analytics & Technology, you will be responsible for driving the architecture, delivery, and governance of Azure-based data solutions across multiple programs. You will play a strategic role in data transformation initiatives while mentoring team members and collaborating with stakeholders across functions. The role also requires exposure to advanced analytics, data science, and LLM integration in production environments, along with strong healthcare domain experience. If you are looking for an entrepreneurial environment and are passionate about working on unstructured business problems that can be solved using data, we would like to talk to you.

KRAs
- Lead design and implementation of scalable cloud data platforms
- Enable advanced analytics and AI by operationalizing structured and unstructured data flows
- Drive data governance, security, and compliance across systems
- Oversee CI/CD pipelines, DevOps automation, and release management
- Drive data analysis and insights generation, displaying strong knowledge of the healthcare domain
- Collaborate with stakeholders to translate business needs into scalable data solutions
- Mentor team members and ensure technical alignment across cross-functional teams
- Independently manage multiple projects with high impact and visibility

Required Skills, Competencies & Experience
- 7-12 years of experience in data engineering and analytics, primarily on Azure
- Strong knowledge and experience of working in the healthcare industry
- Deep expertise in ADF, Azure Databricks, Synapse, Delta Lake, and Unity Catalog
- Strong in data modelling, Python, PySpark, SQL, and managing all data types
- Proven experience in implementing CI/CD and DevOps for data projects
- Familiarity with LLMs, machine learning, and operationalization within Azure
- Strong leadership, project management, and stakeholder communication skills
- Certifications such as Azure Solutions Architect or Databricks Data Engineer Professional are preferred

Location: Delhi / NCR preferred. Designation will be commensurate with expertise/experience. Compensation packages are among the best in the industry.
Posted 5 days ago
3.0 - 6.0 years
11 - 20 Lacs
Bengaluru
Work from Office
Role & responsibilities
We are seeking a skilled Data Engineer to maintain robust data infrastructure and pipelines that support our operational analytics and business intelligence needs. The candidate will bridge the gap between data engineering and operations, ensuring reliable, scalable, and efficient data systems that enable data-driven decision making across the organization.
- Strong proficiency in Spark SQL; hands-on experience with real-time streaming using Kafka and Flink (see the streaming sketch below)
- Databases: strong knowledge of relational databases (Oracle, MySQL) and NoSQL systems
- Proficiency with version control (Git), CI/CD practices, and collaborative development workflows
- Strong operations management and stakeholder communication skills
- Flexibility to work across time zones, with a cross-cultural communication mindset
- Experience working in cross-functional teams
- Continuous learning mindset and adaptability to new technologies

Preferred candidate profile
- Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field
- 3+ years of experience in data engineering, software engineering, or a related role
- Proven experience building and maintaining production data pipelines
- Expertise in the Hadoop ecosystem: Spark SQL, Iceberg, Hive, etc.
- Extensive experience with Apache Kafka, Apache Flink, and other relevant streaming technologies
- Orchestration tools: Apache Airflow and UC4; proficiency in Python, Unix, or similar
- Good understanding of SQL across Oracle, SQL Server, NoSQL, or similar systems
- Immediate joiners or candidates with a notice period under 30 days preferred
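As one concrete illustration of the real-time skills listed above, the hedged sketch below uses Spark Structured Streaming (one of the streaming engines named in the posting) to consume a Kafka topic and count events per page per minute. The broker address, topic, schema, and checkpoint path are placeholders; running it also requires the spark-sql-kafka connector package on the classpath.

```python
# Illustrative Kafka -> Spark Structured Streaming aggregation.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql import types as T

spark = SparkSession.builder.appName("kafka-stream-demo").getOrCreate()

# Subscribe to a Kafka topic; Kafka delivers key/value as binary columns.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "clickstream")
    .load()
)

# Parse the JSON payload into typed columns (hypothetical schema).
schema = T.StructType([
    T.StructField("page", T.StringType()),
    T.StructField("ts", T.TimestampType()),
])
parsed = (
    events.select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Windowed count with a watermark to bound late data.
clicks = (
    parsed.withWatermark("ts", "5 minutes")
    .groupBy(F.window("ts", "1 minute"), "page")
    .count()
)

query = (
    clicks.writeStream.outputMode("update")
    .format("console")
    .option("checkpointLocation", "/tmp/checkpoints/clickstream")
    .start()
)
query.awaitTermination()
```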
Posted 5 days ago
5.0 - 10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description:
- 5 to 10 years of work experience
- Strong understanding of data warehousing principles and data modelling
- Experience with AI/ML Ops: model build through the implementation lifecycle in an AWS Cloud environment and/or Snowflake
- Expert in SQL, including knowledge of advanced query optimization techniques; able to build queries and data visualizations to support business use cases and analytics
- Proven experience with software tools including PySpark and Python, Power BI, QuickSight, and core AWS tools such as Lambda, RDS, CloudWatch, CloudTrail, SNS, SQS, etc.
- Hands-on experience with Snowflake, Snowsight, Snowpark, Snowpipe, and SnowSQL (see the sketch below)
- Experience building services/APIs in an AWS Cloud environment
- Data ingestion and curation as well as implementation of data pipelines
- Experience working with and leading vendor partner (on- and offshore) resources
- Experience in Informatica/ETL technology
- Experience in DevOps and microservices is preferred
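For context on the Snowpark item above, here is a minimal, assumed sketch of a Snowpark Python session that aggregates a table and persists the result for BI tools. The connection parameters and table names are hypothetical, not a real account.

```python
# Hedged Snowpark sketch: query, aggregate, and save a result table.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, sum as sum_

connection_parameters = {
    "account": "my_account",      # placeholder
    "user": "my_user",            # placeholder
    "password": "***",            # use a secrets manager in practice
    "warehouse": "ANALYTICS_WH",
    "database": "SALES",
    "schema": "PUBLIC",
}
session = Session.builder.configs(connection_parameters).create()

# Aggregate completed-order revenue per region.
orders = session.table("ORDERS")
revenue = (
    orders.filter(col("STATUS") == "COMPLETE")
    .group_by("REGION")
    .agg(sum_(col("AMOUNT")).alias("REVENUE"))
)

# Persist for downstream dashboards (QuickSight / Power BI).
revenue.write.mode("overwrite").save_as_table("REVENUE_BY_REGION")

session.close()
```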
Posted 5 days ago
5.0 - 10.0 years
13 - 23 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Job Summary:
We are looking for a skilled and experienced Data Engineer with strong hands-on expertise in PySpark to design, build, and manage large-scale data processing systems. The ideal candidate will have a passion for working with big data technologies and delivering scalable solutions. Experience in the telecom domain is a plus but not mandatory.

Key Responsibilities:
- Design, develop, and optimize large-scale data processing pipelines using PySpark on distributed systems (e.g., Hadoop, Databricks, or EMR)
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver appropriate data solutions
- Perform data extraction, transformation, and loading (ETL/ELT) processes from diverse data sources
- Ensure data quality, integrity, and reliability through robust data validation and monitoring frameworks (see the illustrative sketch below)
- Write optimized Spark code for performance and scalability across big data ecosystems
- Participate in code reviews and contribute to best practices for data engineering
- Work with structured and unstructured data, and integrate real-time and batch data pipelines
- Support and maintain existing data pipelines, troubleshoot issues, and enhance them as required

Required Skills & Experience:
- 5 to 15 years of experience in Data Engineering or related fields
- Strong hands-on experience in PySpark and distributed data processing
- Solid understanding of data warehousing concepts, data modeling, and big data architecture
- Proficiency with big data tools and platforms (e.g., Hadoop, Hive, Spark, HDFS)
- Experience with cloud platforms (AWS, Azure, or GCP) is a plus
- Proficiency in writing efficient SQL queries and working with large datasets
- Strong knowledge of programming in Python
- Experience with version control systems like Git

Nice to Have:
- Experience working in the telecom domain, with an understanding of telecom data and KPIs
- Exposure to workflow orchestration tools such as Airflow, Oozie, or similar
- Familiarity with data governance, data cataloging, and data security principles

Educational Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, Information Technology, or a related field

Why Join Us:
- Work on cutting-edge big data projects in a dynamic and collaborative environment
- Opportunity to influence data strategy and build scalable data infrastructure
- Flexible working hours and competitive compensation
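To make the pipeline and data-quality responsibilities above concrete, here is an illustrative batch PySpark ETL step with a simple fail-fast validation gate. The paths and columns (a telecom-flavoured subscriber table) are invented for the example.

```python
# Hedged sketch: batch PySpark transform with a lightweight data-quality gate.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-with-validation").getOrCreate()

df = spark.read.parquet("/data/raw/subscribers/")

# Transform: normalise the MSISDN and derive a tenure-in-days column.
transformed = (
    df.withColumn("msisdn", F.regexp_replace("msisdn", r"\D", ""))
      .withColumn("tenure_days",
                  F.datediff(F.current_date(), F.col("activation_date")))
)

# Data-quality gate: fail fast if key invariants are violated.
null_keys = transformed.filter(
    F.col("msisdn").isNull() | (F.col("msisdn") == "")
).count()
dupes = transformed.count() - transformed.dropDuplicates(["msisdn"]).count()
if null_keys or dupes:
    raise ValueError(
        f"DQ check failed: {null_keys} null keys, {dupes} duplicate keys"
    )

transformed.write.mode("overwrite").parquet("/data/curated/subscribers/")
```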
Posted 5 days ago
6.0 - 11.0 years
30 - 35 Lacs
Pune
Work from Office
About The Role
Job Title - Full Stack Developer (mainly DB and ETL experience)
Location - Pune, India

Role Description
Currently DWS sources technology infrastructure, corporate functions systems [Finance, Risk, Legal, Compliance, AFC, Audit, Corporate Services etc.] and other key services from DB. Project Proteus aims to strategically transform DWS to an Asset Management standalone operating platform; an ambitious and ground-breaking project that delivers separated DWS infrastructure and Corporate Functions in the cloud with essential new capabilities, further enhancing DWS's highly competitive and agile Asset Management capability. This role offers a unique opportunity to be part of a high-performing team implementing a strategic future-state technology landscape for all of DWS's Corporate Functions globally. We are seeking a highly skilled and experienced ETL developer with Informatica tool experience, strong development experience on various RDBMS, and exposure to cloud-based platforms. The ideal candidate will be responsible for designing, developing, and implementing robust and scalable custom solutions, extensions, and integrations in a cloud-first environment. This role requires a deep understanding of data migration, system integration and optimization, and cloud-native development principles, and the ability to work collaboratively with functional teams and business stakeholders. The role also provides support to US business stakeholders and regulatory reporting processes during morning US hours.

What we'll offer you
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Accident and term life insurance

Your key responsibilities
- Create good-quality software designs; a strong sense of software design principles is required
- Hands-on code development and thorough testing of developed software
- Mentor junior team members on both the technical and functional front
- Review code of other team members
- Participate in and manage daily stand-up meetings
- Articulate issues and risks to management in a timely manner
- The role involves roughly 80% technical work and 20% other activities such as team handling, mentoring, status reporting, and year-end appraisals
- Analyse software defects and fix them in a timely manner
- Work closely with stakeholders and other teams such as Functional Analysis and Quality Assurance
- Support testing on behalf of users, operations, and testing teams, potentially including test plans, test cases, test data, and review of interface testing between different applications when required
- Work with application developers to resolve functional issues from UATs and help find solutions for various functionally difficult areas
- Work closely with business analysts to detail proposed solutions and solution maintenance
- Work with the Application Management area on functional troubleshooting and resolution of reported bugs/issues in applications

Your skills and experience
- Bachelor's degree from an accredited college or university with a concentration in Science or an IT-related discipline (or equivalent)
- Hands-on in technology, with a minimum of 10 years of IT industry experience
- Proficient in Informatica or any ETL tool
- Hands-on experience in Oracle SQL/PL-SQL
- Exposure to PostgreSQL
- Exposure to Cloud / Big Data technology
- CI/CD (TeamCity or Jenkins) and GitHub usage
- Basic UNIX commands
- Exposure to the Control-M scheduling tool
- Experience in an Agile/Scrum software development environment
- High analytical capabilities, proven communication skills, and effective problem solving
- Able to multi-task and work under tight deadlines, identifying and escalating problems at an early stage
- Flexibility and willingness to work autonomously; self-motivated within set competencies in a team and fast-paced environments
- High degree of accuracy and attention to detail

Nice to have
- Any exposure to PySpark is a plus
- Any exposure to React JS or Angular JS is a plus
- Architecting and automating the build process for production using scripts

About us and our teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We at DWS are committed to creating a diverse and inclusive workplace, one that embraces dialogue and diverse views, and treats everyone fairly to drive a high-performance culture. The value we create for our clients and investors is based on our ability to bring together various perspectives from all over the world and from different backgrounds. It is our experience that teams perform better and deliver improved outcomes when they are able to incorporate a wide range of perspectives. We call this #ConnectingTheDots.
Posted 5 days ago
0.0 - 5.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Publicis Sapient is looking for Python developers to join our team of bright thinkers and enablers. You will use your problem-solving skills, craft, and creativity to design and develop infrastructure interfaces for complex business applications. We are on a mission to transform the world, and you will be instrumental in shaping how we do it with your ideas, thoughts, and solutions.

Key Responsibilities
- Design and develop scalable PySpark data pipelines to ensure efficient processing of large datasets, enabling faster insights and business decision-making
- Leverage Databricks notebooks for collaborative data engineering and analytics, improving team productivity and reducing development cycle times
- Write clean, modular, and reusable Python code to support data transformation and enrichment, ensuring maintainability and reducing technical debt
- Implement data quality checks and validation logic within ETL workflows to ensure trusted data is delivered for downstream analytics and reporting
- Optimize Spark jobs for performance and cost-efficiency by tuning partitions, caching strategies, and cluster configurations, resulting in reduced compute costs (see the tuning sketch below)

Your Skills & Experience:
- Solid understanding of Python programming fundamentals, especially in building modular, efficient, and testable code for data processing
- Familiarity with libraries like pandas, NumPy, and SQLAlchemy (for lightweight transformations or metadata management)
- Proficient in writing and optimizing PySpark code for large-scale distributed data processing
- Deep knowledge of Spark internals: partitioning, shuffling, lazy evaluation, and performance tuning
- Comfortable using Databricks notebooks, clusters, and Delta Lake

Set Yourself Apart With
- Familiarity with cloud-native services like AWS S3, EMR, Glue, Lambda, or Azure Data Factory
- Experience deploying or integrating pipelines within a cloud environment, which adds flexibility and scalability
- Experience with tools like Great Expectations or custom-built validation logic to ensure data trustworthiness

A Tip From The Hiring Manager
This person should be highly organized, adapt quickly to change, and thrive in a fast-paced organization. This is a job for the curious, make-things-happen kind of person: someone who thinks like an entrepreneur and can motivate and move their team to achieve and drive impact.

Benefits Of Working Here
- Gender-neutral policy
- 18 paid holidays throughout the year
- Generous parental leave and new-parent transition program
- Employee Assistance Programs to help you in wellness and well-being
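As an illustration of the Spark optimization skills this role emphasizes, the sketch below shows three common levers (broadcast joins, caching, partition sizing) in PySpark. The datasets and paths are hypothetical; real tuning depends on data volumes and cluster configuration.

```python
# Illustrative PySpark tuning levers: broadcast join, caching, partition sizing.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

events = spark.read.parquet("/data/events/")        # large fact table
countries = spark.read.parquet("/data/countries/")  # small dimension table

# 1. Broadcast the small dimension so the join avoids a full shuffle.
enriched = events.join(F.broadcast(countries), "country_code")

# 2. Cache a DataFrame that several downstream aggregations will reuse,
#    so it is computed once instead of per action.
enriched.cache()

daily = enriched.groupBy("event_date").count()
by_country = enriched.groupBy("country_name").count()
daily.show()
by_country.show()

# 3. Right-size output partitions: coalesce reduces partition count without
#    a shuffle, keeping output files reasonably sized.
enriched.coalesce(32).write.mode("overwrite").parquet("/data/enriched/")
```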
Posted 5 days ago
1.0 - 4.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Project Role: Software Development Engineer
Project Role Description: Analyze, design, code and test multiple components of application code across one or more clients. Perform maintenance, enhancements and/or development work.
Must have skills: PySpark
Good to have skills: NA
Minimum 2 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Software Development Engineer, you will engage in a dynamic work environment where you will analyze, design, code, and test various components of application code across multiple clients. Your day will involve collaborating with team members to ensure the successful implementation of software solutions, while also performing maintenance and enhancements to existing applications. You will be responsible for delivering high-quality code and contributing to the overall success of the projects you are involved in, ensuring that all components function seamlessly and meet client requirements.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Collaborate with cross-functional teams to gather requirements and translate them into technical specifications.
- Conduct code reviews to ensure adherence to best practices and coding standards.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in PySpark and Azure Synapse.
- Strong understanding of data processing frameworks and distributed computing.
- Experience with data integration and ETL processes.
- Familiarity with cloud platforms and services related to data processing.
- Ability to troubleshoot and optimize performance issues in application code.

Additional Information:
- The candidate should have minimum 2 years of experience in PySpark.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.
Posted 5 days ago
4.0 - 9.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At PwC, our people in risk and compliance focus on maintaining regulatory compliance and managing risks for clients, providing advice and solutions. They help organisations navigate complex regulatory landscapes and enhance their internal controls to mitigate risks effectively. Those in enterprise risk management at PwC focus on identifying and mitigating potential risks that could impact an organisation's operations and objectives. You will be responsible for developing business strategies to effectively manage and navigate risks in a rapidly changing business environment.

Focused on relationships, you are building meaningful client connections and learning how to manage and inspire others. Navigating increasingly complex situations, you are growing your personal brand, deepening technical expertise, and developing awareness of your strengths. You are expected to anticipate the needs of your teams and clients and to deliver quality. Embracing increased ambiguity, you are comfortable when the path forward isn't clear, you ask questions, and you use these moments as opportunities to grow.

Skills
Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to:
- Respond effectively to the diverse perspectives, needs, and feelings of others.
- Use a broad range of tools, methodologies, and techniques to generate new ideas and solve problems.
- Use critical thinking to break down complex concepts.
- Understand the broader objectives of your project or role and how your work fits into the overall strategy.
- Develop a deeper understanding of the business context and how it is changing.
- Use reflection to develop self-awareness, enhance strengths, and address development areas.
- Interpret data to inform insights and recommendations.
- Uphold and reinforce professional and technical standards (e.g. refer to specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements.

The Opportunity
When you join PwC Acceleration Centers (ACs), you step into a pivotal role focused on actively supporting various Acceleration Center services, from Advisory to Assurance, Tax, and Business Services. In our innovative hubs, you'll engage in challenging projects and provide distinctive services to support client engagements through enhanced quality and innovation. You'll also participate in dynamic and digitally enabled training designed to grow your technical and professional skills. As part of the OFRO - QA team, you will ensure the quality and accuracy of dashboards and data workflows through meticulous testing and validation. As a Senior Associate, you will leverage your knowledge in data analysis and automation testing to mentor others, navigate complex testing environments, and uphold quality standards throughout the software development lifecycle. This position provides an exciting opportunity to work with advanced BI tools and contribute to continuous improvement initiatives in a dynamic team setting.

Key Responsibilities

ETL Development & Data Engineering
- Design, build, and maintain scalable ETL pipelines using Azure Data Factory, Databricks, and custom Python scripts.
- Integrate and ingest data from on-prem, cloud, and third-party APIs into modern data platforms.
- Perform data cleansing, validation, and transformation to ensure data quality and consistency.
- Machine learning experience is desirable.

Programming and Scripting
- Write robust and reusable Python scripts for data processing, automation, and orchestration.
- Develop complex SQL queries for data extraction, transformation, and reporting.
- Optimize code for performance, scalability, and maintainability.

Cloud & Platform Integration
- Work within Azure ecosystems, including Blob Storage, SQL Database, ADF, Synapse, and Key Vault.
- Use Databricks (PySpark/Delta Lake) for advanced transformations and big data processing (see the sketch below).
- Hands-on Power BI experience is good to have.

Collaboration and Communication
- Work closely with cross-functional teams to ensure quality throughout the software development lifecycle.
- Provide regular status updates and test results to stakeholders.
- Participate in daily stand-ups, sprint planning, and Agile ceremonies.

Shift time: 2 pm to 11 pm IST
Total experience required: 4-9 years
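To ground the Databricks/Delta Lake portion of the stack above, here is a hedged PySpark sketch of an incremental upsert (merge) into a Delta table, a common pattern when ADF lands batches for Databricks to curate. The table and mount names are placeholders.

```python
# Illustrative Delta Lake upsert: merge an incremental batch into a curated table.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-upsert").getOrCreate()

# Incremental batch landed by an upstream pipeline (hypothetical path).
updates = spark.read.json("/mnt/landing/customers/")

# Existing curated Delta table (hypothetical name, assumed already registered).
target = DeltaTable.forName(spark, "curated.customers")

# Merge: update matching customer rows, insert new ones.
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```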
Posted 5 days ago
2.0 - 5.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must have skills: Data Modeling Techniques and Methodologies
Good to have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, encompassing the relevant data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, while also engaging in discussions to refine and enhance the overall data architecture strategy. You will be involved in various stages of the data platform lifecycle, ensuring that all components work harmoniously to support the organization's data needs and objectives.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor and evaluate team performance to ensure alignment with project goals.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Data Modeling Techniques and Methodologies.
- Strong understanding of data integration processes and tools.
- Experience with data warehousing concepts and practices.
- Familiarity with ETL processes and data pipeline development.
- Ability to work with various database management systems.

Additional Information:
- The candidate should have minimum 7.5 years of experience in Data Modeling Techniques and Methodologies.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.
Posted 5 days ago
3.0 - 8.0 years
5 - 9 Lacs
Chennai
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: Scala, PySpark
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will also participate in testing and debugging processes to guarantee the quality of the applications you create, while continuously seeking ways to enhance functionality and user experience.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Collaborate with cross-functional teams to gather requirements and translate them into technical specifications.
- Conduct thorough testing and debugging of applications to ensure optimal performance and reliability.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Databricks Unified Data Analytics Platform.
- Good To Have Skills: Experience with PySpark, Scala.
- Strong understanding of data integration and ETL processes.
- Familiarity with cloud computing concepts and services.
- Experience in application lifecycle management and agile methodologies.

Additional Information:
- The candidate should have minimum 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Chennai office.
- A 15 years full time education is required.
Posted 5 days ago
3.0 - 8.0 years
4 - 8 Lacs
Pune
Work from Office
Required Skills and Competencies:
- Experience: 3+ years.
- Expertise in Python is a MUST.
- SQL (ability to write complex SQL queries) is a MUST.
- Hands-on experience in Apache Flink Streaming or Spark Streaming is a MUST.
- Hands-on expertise in Apache Kafka is a MUST.
- Data lake development experience.
- Orchestration (Apache Airflow is preferred; see the sketch below).
- Spark and Hive: optimization of Spark/PySpark and Hive applications.
- Trino / AWS Athena (good to have).
- Snowflake (good to have).
- Data quality (good to have).
- File storage (S3 is good to have).
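For the orchestration item above, here is a minimal, illustrative Airflow DAG chaining an ingest step and a Spark transform. The DAG id, schedule, and job scripts are invented; real tasks would point at your own jobs and use appropriate operators.

```python
# Hedged Airflow sketch: a daily two-step pipeline (ingest, then transform).
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_lake_refresh",      # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="ingest_from_kafka",
        bash_command="python /opt/jobs/ingest_kafka_to_s3.py",  # placeholder script
    )
    transform = BashOperator(
        task_id="spark_transform",
        bash_command="spark-submit /opt/jobs/curate_data_lake.py",  # placeholder
    )

    # Ordering: transform runs only after ingest succeeds.
    ingest >> transform
```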
Posted 5 days ago
2.0 - 5.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: PySpark
Good to have skills: Python (Programming Language), Scala
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will engage in problem-solving activities, participate in team meetings, and contribute to the overall success of projects by delivering high-quality applications that align with business objectives.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Continuously evaluate and improve application performance and user experience.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in PySpark.
- Good To Have Skills: Experience with Python (Programming Language), Scala.
- Strong understanding of data processing frameworks and distributed computing.
- Experience with data integration and ETL processes.
- Familiarity with cloud platforms and services related to application development.

Additional Information:
- The candidate should have minimum 5 years of experience in PySpark.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.
Posted 5 days ago
3.0 - 8.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: PySpark
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various stakeholders to gather requirements, overseeing the development process, and ensuring that the applications meet the specified needs. You will also be responsible for troubleshooting issues and providing guidance to team members, fostering a collaborative environment that encourages innovation and efficiency in application development.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Mentor junior team members to support their professional growth.

Professional & Technical Skills:
- Good to have skills: AWS S3, Delta Lake, Airflow.
- 4+ years of experience in Python.
- Must be a strong hands-on senior developer.
- Must possess good technical and non-technical communication skills to highlight areas of concern or risk.
- Should have good troubleshooting skills to perform root-cause analysis of production support issues.

Additional Information:
- The candidate should have minimum 3 years of experience in PySpark.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.
- The candidate must be willing to work in Shift B, i.e. 11 AM IST to 9 PM IST, and provide weekend support as per a pre-agreed rota; a compensation holiday may be provided for the weekend shift.
Posted 5 days ago
5.0 - 8.0 years
15 - 25 Lacs
Bengaluru
Work from Office
Job Title: AWS Data Engineer
Location: Bangalore
Notice Period: Immediate to 60 days preferred

Job Description:
We are seeking skilled and dynamic Cloud Data Engineers specializing in AWS and Databricks. The ideal candidate will have a strong background in data engineering, with a focus on data ingestion, transformation, and warehousing. They should also possess excellent knowledge of PySpark or Spark, and a proven ability to optimize performance in Spark job executions.

Key Responsibilities:
- Design, build, and maintain scalable data pipelines for a variety of cloud platforms including AWS.
- Implement data ingestion and transformation processes to facilitate efficient data warehousing.
- Utilize cloud services to enhance data processing capabilities, including AWS Glue, Athena, Lambda, Redshift, Step Functions, DynamoDB, and SNS (see the Athena sketch below).
- Optimize Spark job performance to ensure high efficiency and reliability.
- Stay proactive in learning and implementing new technologies to improve data processing frameworks.
- Collaborate with cross-functional teams to deliver robust data solutions.
- Work on Spark Streaming for real-time data processing as necessary.

Qualifications:
- 5-8 years of experience in data engineering with a strong focus on cloud environments.
- Proficiency in PySpark or Spark is mandatory.
- Proven experience with data ingestion, transformation, and data warehousing.
- In-depth knowledge and hands-on experience with AWS cloud services.
- Demonstrated ability in performance optimization of Spark jobs.
- Strong problem-solving skills and the ability to work independently as well as in a team.
- Cloud certification (AWS) is a plus.
- Familiarity with Spark Streaming is a bonus.
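As a small illustration of the AWS services named above, the following hedged boto3 sketch runs an Athena query and polls for the result. The region, database, table, and result bucket are invented; production code would use exponential backoff or a Step Functions state machine instead of a tight polling loop.

```python
# Illustrative boto3 Athena usage: submit a query, wait, print the rows.
import time

import boto3

athena = boto3.client("athena", region_name="ap-south-1")

resp = athena.start_query_execution(
    QueryString="SELECT region, COUNT(*) AS orders FROM sales.orders GROUP BY region",
    QueryExecutionContext={"Database": "sales"},                 # placeholder
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # placeholder
)
qid = resp["QueryExecutionId"]

# Poll until the query reaches a terminal state.
while True:
    status = athena.get_query_execution(QueryExecutionId=qid)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
```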
Posted 5 days ago
5.0 - 10.0 years
5 - 9 Lacs
Chennai
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: PySpark
Good to have skills: Apache Spark
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. You will be responsible for ensuring that the applications are developed according to the specified requirements and are aligned with the business goals. Your typical day will involve collaborating with the team to understand the application requirements, designing and developing the applications using PySpark, and configuring the applications to meet the business process needs. You will also be responsible for testing and debugging the applications to ensure their functionality and performance.

Roles & Responsibilities:
- Expected to be an SME; collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Design and build applications using PySpark.
- Configure applications to meet business process requirements.
- Collaborate with the team to understand application requirements.
- Test and debug applications to ensure functionality and performance.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in PySpark.
- Good To Have Skills: Experience with Apache Spark.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms (see the sketch below).
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity.

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- This position is based at our Chennai office.
- A 15 years full time education is required.
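To illustrate the machine-learning expectations above, here is a hedged PySpark MLlib sketch that assembles numeric features and fits a logistic regression. The dataset path and column names are hypothetical; a real pipeline would add feature scaling, categorical encoding, and evaluation.

```python
# Illustrative PySpark MLlib pipeline: feature assembly + logistic regression.
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import VectorAssembler
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mllib-demo").getOrCreate()

# Expects numeric feature columns and a binary 'label' column (hypothetical data).
df = spark.read.parquet("/data/churn/")

assembler = VectorAssembler(
    inputCols=["tenure", "monthly_charges", "support_calls"],
    outputCol="features",
)
lr = LogisticRegression(featuresCol="features", labelCol="label")

train, test = df.randomSplit([0.8, 0.2], seed=42)
model = Pipeline(stages=[assembler, lr]).fit(train)

predictions = model.transform(test)
predictions.select("label", "prediction", "probability").show(5)
```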
Posted 5 days ago
12.0 - 15.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: PySpark
Minimum 12 year(s) of experience is required
Educational Qualification: Graduate

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing innovative solutions, and ensuring that applications are aligned with business objectives. You will engage in problem-solving activities, participate in team meetings, and contribute to the overall success of projects by leveraging your expertise in application development.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Expected to provide solutions to problems that apply across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and ensure timely delivery of application features.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Databricks Unified Data Analytics Platform.
- Good To Have Skills: Experience with PySpark.
- Strong understanding of data integration and ETL processes.
- Experience in developing scalable applications using cloud technologies.
- Familiarity with data governance and compliance standards.

Additional Information:
- The candidate should have minimum 12 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Bengaluru office.
- A Graduate degree is required.
Posted 5 days ago
2.0 - 5.0 years
5 - 9 Lacs
Chennai
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: PySpark
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will also participate in testing and debugging processes to ensure the applications function as intended, while continuously seeking ways to enhance application efficiency and user experience.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Assist in the documentation of application specifications and user guides.
- Engage in code reviews to ensure adherence to best practices and standards.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data integration and ETL processes.
- Experience with cloud computing platforms and services.
- Familiarity with programming languages such as Python or Scala.
- Knowledge of data visualization techniques and tools.

Additional Information:
- The candidate should have minimum 2 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Chennai office.
- A 15 years full time education is required.
- The candidate should be ready to work in rotational shifts.
Posted 5 days ago
4.0 - 9.0 years
10 - 14 Lacs
Pune
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: Microsoft Azure Data Services
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team through the development process. You will also engage in strategic planning sessions to align project goals with organizational objectives, ensuring that the applications developed meet both user needs and technical requirements. Your role will be pivotal in fostering a collaborative environment that encourages innovation and problem-solving among team members.

Roles & Responsibilities:
- Minimum of 4 years of experience in data engineering or similar roles.
- Proven expertise with Databricks and data processing frameworks.
- Technical skills: SQL, Spark, PySpark, Databricks, Python, Scala, Spark SQL.
- Strong understanding of data warehousing, ETL processes, and data pipeline design.
- Experience with SQL, Python, and Spark.
- Excellent problem-solving and analytical skills.
- Effective communication and teamwork abilities.

Professional & Technical Skills:
- Experience and knowledge of Azure SQL Database, Azure Data Factory, and ADLS.

Additional Information:
- The candidate should have minimum 5 years of experience in Microsoft Azure Data Services.
- This position is based in Pune.
- A 15 years full time education is required.
Posted 5 days ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Who are we? Infosys (NYSE: INFY) is a global leader in consulting, technology, and outsourcing solutions. We enable clients, in more than 46 countries, to stay a step ahead of emerging business trends and outperform the competition. Infosys Consulting (IC) partners with clients from strategy through execution to transform their businesses in areas such as business /IT strategy, processes, organization, systems and risk. Infosys Consulting has 2600+ people across the US, Europe, APAC, and India, contributing over $628m in consulting revenue annually. We are Value Integrators – we deliver realized business value by managing transformations from strategy / setting direction through execution, including operating and optimizing delivered solutions. IC – SURE (Services, Utilities, Resources & Energy) is dedicated to serving Oil & Gas , Utilities, Resources and Service firms globally. The team in India works with its overseas counterparts and client teams to provide business consulting services to clients in the US, Europe, and Asia Pacific markets. Responsibilities: Supporting pursuits with large Oil & Gas/Utilities prospects by articulating Infosys’ unique value proposition through practical use cases across the value chain. Gathering, identifying, and documenting business requirements and creating functional specifications for new systems and processes. Assessing as-is processes, conducting gap analysis, designing to-be processes, and recommending changes. Experience with Six Sigma, Lean, or similar methodologies to drive continuous improvement in technology projects. Technology Project Management, including managing technology vendors and client stakeholders Managing large projects and programs in a multi-vendor, globally distributed team environment, leveraging Agile principles and DevOps capabilities. Collaborating closely with the IT Project Management Office. Supporting the implementation of client-specific digital solutions, including business case development, IT strategy, and tool/software selection. Design and implement scalable data pipelines, ETL/ELT workflows, and optimized data models across cloud data warehouses and lakes, enabling reliable access to high-quality data for business insights and strategic decision-making. Build and maintain dashboards, reports, and visualizations using tools such as Power BI, Tableau, etc. Write SQL queries and scripts to extract, clean, and manipulate data from multiple sources and conduct deep-dive analyses to evaluate business performance, identify opportunities, and support operational decisions. Integrate and govern data from diverse enterprise systems, ensuring data quality, integrity, and compliance with security and governance standards—supporting business-critical reporting, analytics, and regulatory needs. Collaborate with business stakeholders to translate strategic objectives into data-driven solutions, defining KPIs, uncovering actionable insights from structured and unstructured data, and enabling self-service analytics through partnerships with analysts and product teams. Working closely with client IT teams and business stakeholders to uncover opportunities and derive actionable insights. Participating in internal firm-building activities such as knowledge management. Supporting sales efforts for new and existing clients through proposal creation and sales presentation facilitation. Document data workflows, solutions, and processes clearly for both technical teams and business users. 
Working in Agile teams to manage data projects, aligning with PMO initiatives, and ensuring business-focused delivery in global, multi-vendor environments.
Supporting digital solution delivery, including IT strategy, business case development, tool selection, and implementation.
Contributing to client pursuits and internal knowledge-sharing by presenting digital use cases and supporting proposal development.
Required Qualifications:
3–5 years of experience in data engineering with a strong track record in business-facing roles such as Business Analysis, Product Design, or Project Management, ideally within digital technology initiatives in the Oil & Gas or Utilities sector.
Strong grasp of business analysis principles, with proven experience in gathering and documenting requirements and translating business needs into effective technical designs.
Excellent communication skills, both written and verbal, with the ability to convey ideas to technical and non-technical audiences.
Skilled in data integration, transformation, and orchestration tools such as AWS Glue, PySpark, Python, Azure Data Factory, SparkSQL, SQL, Palantir, and Databricks Pipeline Builder.
Skilled in data visualization tools such as Power BI, Tableau, Palantir Contour, and Palantir Workshop, with hands-on experience using project and workflow tools like Azure DevOps (ADO), JIRA, VSTS, or ServiceNow (SNOW).
Broad understanding of one or more modern digital technologies (e.g., Robotic Process Automation, Digital Transformation, Business Intelligence, AI/ML, Big Data, Data Analytics, IoT).
Bachelor's degree or full-time MBA/PGDM from Tier 1/Tier 2 B-Schools in India, or foreign equivalent.
Preferred Qualifications:
Knowledge of one or more digital technologies (Robotic Process Automation, Digital Transformation, Business Intelligence, Artificial Intelligence, Machine Learning, Big Data technologies, Data Analytics, IoT, etc.) and their application in the Oil & Gas/Utilities industry.
Strong knowledge of agile development practices (Scrum), methodologies, and tools.
Excellent teamwork and written and verbal communication skills, with the ability to communicate ideas in both technical and user-friendly language.
Ability to work as part of a cross-cultural team, including flexibility to support multiple time zones when necessary.
Ability to interact with mid-level managers in clients' organizations.
Understanding of the SDLC (Software Development Lifecycle).
Proven ability to work in multidisciplinary teams and to build strong relationships with clients.
Preferred Location(s):
Electronic City, Phase 1, Bengaluru, Karnataka
Pocharam Village, Hyderabad, Telangana
Sholinganallur, Chennai, Tamil Nadu
Hinjewadi Phase 3, Pune, Maharashtra
Sector 48, Tikri, Gurgaon, Haryana
Kishangarh, Chandigarh
Jaipur
Ahmedabad
Indore
Location of posting is subject to business needs and requirements. The job entails sitting as well as working at a computer for extended periods of time. The employee should be able to communicate by telephone, email, or face to face. Please note this description does not cover or contain a comprehensive listing of activities, duties, or responsibilities required of the employee. EOE/Minority/Female/Veteran/Disabled/Sexual Orientation/Gender Identity
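To make the data-pipeline responsibilities above concrete, here is a minimal PySpark ETL sketch of the kind of extract-clean-load job this role describes. It is an illustration only: the source path, target path, and columns (meter_id, reading_ts, reading_kwh) are hypothetical placeholders, not details taken from this posting.

# Illustrative PySpark ETL sketch: extract raw utility meter readings,
# clean them, and load a partitioned, analytics-ready dataset.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("meter-readings-etl").getOrCreate()

# Extract: read raw CSV records from a hypothetical landing zone.
raw = spark.read.option("header", True).csv("s3://example-bucket/raw/meter_readings/")

# Transform: drop malformed rows, normalize types, derive a partition column.
clean = (
    raw.dropna(subset=["meter_id", "reading_ts"])
       .withColumn("reading_kwh", F.col("reading_kwh").cast("double"))
       .withColumn("reading_date", F.to_date("reading_ts"))
       .filter(F.col("reading_kwh") >= 0)
)

# Load: write a partitioned Parquet dataset to a hypothetical curated zone.
clean.write.mode("overwrite").partitionBy("reading_date").parquet(
    "s3://example-bucket/curated/meter_readings/"
)

In practice a job like this would be scheduled by an orchestrator such as Azure Data Factory or AWS Glue workflows; the sketch shows only the transformation core.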
Posted 5 days ago
5.0 - 8.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: Scala, PySpark
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that application development aligns with business objectives, overseeing project timelines, and facilitating communication among stakeholders to drive project success. You will also engage in problem-solving activities, providing guidance and support to your team while ensuring that best practices are followed throughout the development process. Your role will be pivotal in shaping the direction of application projects and ensuring that they meet the needs of the organization and its clients.
Roles & Responsibilities:
- Expected to be a subject-matter expert (SME).
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Facilitate workshops and meetings to gather requirements and feedback from stakeholders.
Professional & Technical Skills:
- Must Have Skills: Proficiency in the Databricks Unified Data Analytics Platform (a minimal sketch follows this listing).
- Good To Have Skills: Experience with PySpark and Scala.
- Strong understanding of data engineering principles and practices.
- Experience with cloud-based data solutions and architectures.
- Familiarity with data governance and compliance standards.
Additional Information:
- The candidate should have a minimum of 7.5 years of experience with the Databricks Unified Data Analytics Platform.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
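As a rough illustration of the Databricks work this role leads, below is a minimal PySpark sketch that aggregates an orders table into a daily revenue summary and persists it for downstream reporting. The table and column names (sales.orders, order_ts, amount, customer_id) are hypothetical placeholders, not details from this posting.

# Minimal Databricks-style sketch: aggregate a source table into a
# daily summary table for downstream reporting. Names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# On Databricks a SparkSession named `spark` is provided by the runtime;
# getOrCreate() keeps the sketch runnable on a local Spark install too.
spark = SparkSession.builder.getOrCreate()

orders = spark.read.table("sales.orders")

daily_revenue = (
    orders.withColumn("order_date", F.to_date("order_ts"))
          .groupBy("order_date")
          .agg(
              F.sum("amount").alias("total_revenue"),
              F.countDistinct("customer_id").alias("unique_customers"),
          )
)

# Persist as a managed table (Delta by default on Databricks).
daily_revenue.write.mode("overwrite").saveAsTable("sales.daily_revenue_summary")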
Posted 5 days ago