7.0 - 12.0 years
8 - 13 Lacs
Chennai
Work from Office
Overview We are looking for a highly skilled Lead Engineer to spearhead our data and application migration projects. The ideal candidate will have in-depth knowledge of cloud migration strategies, especially with AWS, and hands-on experience in large-scale migration initiatives. This role requires strong leadership abilities, technical expertise, and a keen understanding of both the source and target platforms. Responsibilities Lead end-to-end migration projects, including planning, design, testing, and implementation. Collaborate with stakeholders to define migration requirements and goals. Perform assessments of existing environments to identify the scope and complexity of migration tasks. Design and architect scalable migration strategies, ensuring minimal downtime and business continuity. Oversee the migration of on-premises applications, databases, and data warehouses to cloud infrastructure. Ensure the security, performance, and reliability of migrated workloads. Provide technical leadership and guidance to the migration team, ensuring adherence to best practices. Troubleshoot and resolve any technical challenges related to the migration process. Collaborate with cross-functional teams, including infrastructure, development, and security. Document migration procedures and lessons learned for future reference.
Posted 1 month ago
5.0 - 10.0 years
7 - 11 Lacs
Pune
Work from Office
5+ years of experience with BI tools, with expertise and/or certification in at least one major BI platform - Tableau preferred. Advanced knowledge of SQL, including the ability to write complex stored procedures, views, and functions. Proven capability in data storytelling and visualization, delivering actionable insights through compelling presentations. Excellent communication skills, with the ability to convey complex analytical findings to non-technical stakeholders in a clear, concise, and meaningful way. Identifying and analyzing industry trends, geographic variations, competitor strategies, and emerging customer behavior Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Provide expertise in analysis, requirements gathering, design, coordination, customization, testing and support of reports in client's environment Develop and maintain a strong working relationship with business and technical members of the team Relentless focus on quality and continuous improvement Perform root cause analysis of report issues Development / evolutionary maintenance of the environment, performance, capability and availability Assisting in defining technical requirements and developing solutions Effective content and source-code management, troubleshooting and debugging Preferred technical and professional experience Troubleshooting capabilities to debug data controls Capable of converting business requirements into a workable model Good communication skills, willingness to learn new technologies, team player, self-motivated, positive attitude Must have a thorough understanding of SQL and advanced SQL (joins and relationships)
Posted 1 month ago
2.0 - 6.0 years
12 - 16 Lacs
Bengaluru
Work from Office
Design, develop, and manage our data infrastructure on AWS, with a focus on data warehousing solutions. Write efficient, complex SQL queries for data extraction, transformation, and loading. Utilize DBT for data modeling and transformation. Use Python for data engineering tasks, demonstrating strong work experience in this area. Implement scheduling tools like Airflow, Control-M, or shell scripting to automate data processes and workflows (a minimal Airflow sketch follows below). Participate in an Agile environment, adapting quickly to changing priorities and requirements Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Mandatory Skills: Candidate should have worked on traditional data warehousing with any database (Oracle, DB2, or SQL Server; Redshift optional). Candidate should have strong SQL skills and the ability to write complex queries using analytical functions. Prior working experience on the AWS platform. Python programming experience for data engineering. Experience in PySpark/Spark. Working knowledge of the data pipeline tool Airflow. The below skills are nice to have: Experience with DBT, exposure to working in an Agile environment. Proven ability to troubleshoot and resolve production issues under a DevOps model. A track record of continuously identifying opportunities to improve the performance and quality of your ecosystem. Experience monitoring performance. Preferred technical and professional experience Knowledge of DBT for data modeling and transformation is a plus. Experience with PySpark or Spark is highly desirable
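By way of illustration for the scheduling requirement above, a minimal sketch of a daily Airflow DAG (Airflow 2.x API assumed; the DAG id, task names, and callables are hypothetical, not this employer's actual pipeline):

```python
# A minimal sketch of a daily Airflow DAG that runs an extract-then-load
# sequence; job names and the callables are hypothetical placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders(**context):
    # Placeholder: pull the day's orders from the source database.
    print(f"extracting orders for {context['ds']}")


def load_to_warehouse(**context):
    # Placeholder: copy the extracted file into the warehouse.
    print(f"loading orders for {context['ds']}")


with DAG(
    dag_id="orders_daily",                # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",           # run once per day
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_orders)
    load = PythonOperator(task_id="load", python_callable=load_to_warehouse)

    extract >> load                       # load runs only after extract succeeds
```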
Posted 1 month ago
4.0 - 9.0 years
12 - 16 Lacs
Kochi
Work from Office
As Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark Framework with Python or Scala on Hadoop and the AWS Cloud Data Platform. Responsibilities: Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases. Process the data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS (a minimal PySpark sketch follows below). Experienced in developing efficient software code for multiple use cases leveraging the Spark Framework using Python or Scala, and Big Data technologies for various use cases built on the platform. Experience in developing streaming pipelines. Experience working with Hadoop / AWS ecosystem components to implement scalable solutions to meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, and cloud computing. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark / Python or Scala; Minimum 3 years of experience on Cloud Data Platforms on AWS; Experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB. Good to excellent SQL skills. Exposure to streaming solutions and message brokers like Kafka. Preferred technical and professional experience Certification in AWS and Databricks, or Cloudera Spark Certified developers
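As a concrete (hypothetical) illustration of the kind of batch pipeline described above, a minimal PySpark job that ingests raw files, transforms them, and writes partitioned Parquet; bucket names and columns are made up:

```python
# A minimal sketch of a PySpark batch pipeline: read raw files, transform,
# and write partitioned Parquet. Paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingest_events").getOrCreate()

# Ingest: read raw CSV dropped into a landing zone (S3 or HDFS).
raw = spark.read.option("header", True).csv("s3://example-bucket/landing/events/")

# Transform: type the timestamp, derive a date partition, filter bad rows.
events = (
    raw.withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
       .filter(F.col("event_id").isNotNull())
)

# Load: write partitioned Parquet for downstream Hive / Redshift Spectrum queries.
events.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/events/"
)
```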
Posted 1 month ago
3.0 - 8.0 years
10 - 20 Lacs
Bengaluru
Work from Office
Hiring for a FAANG company. Note: This position is part of a program designed to support women professionals returning to the workforce after a career break (9+ months career gap). About the Role: A global analytics team is seeking a Business Intelligence Engineer to drive data-backed insights, design robust reporting frameworks, and influence key business strategies across international markets. This role is ideal for professionals returning to the workforce and looking to re-engage in high-impact analytical work. You will collaborate closely with business stakeholders across geographies (Europe, US, Japan, Asia), working on payments and lending analytics. This is a high-ownership, high-impact role requiring a passion for data, a knack for storytelling through dashboards, and the ability to work independently in a fast-paced environment. Key Responsibilities: Design and maintain dashboards, reports, and metrics to support executive-level business decision-making. Ensure data accuracy and integrity across tools, dashboards, and reporting pipelines. Use SQL, Excel, and scripting languages (e.g., Python, R, Java) for deep-dive analysis. Develop BI tools and data visualizations using platforms like Tableau, AWS QuickSight, Looker, etc. Analyze business trends and apply statistical rigor (t-tests, chi-squared tests, regressions, forecasting) to derive insights; a short sketch of such tests follows below. Lead alignment and standardization of key metrics across global BI teams. Model data and metadata to support robust analytics infrastructure. Automate manual reporting efforts to enhance operational efficiency. Work with cross-functional teams to recommend data-driven growth strategies. Present insights and narratives to stakeholders including global leaders and executives. A Day in the Life: Define and refine performance metrics, reports, and insights for international payment systems. Drive analytical alignment with global BI leaders and executive stakeholders. Lead deep dives into metrics influencing revenue, signups, and operational performance. Own VP- and Director-level reporting initiatives and decision-support analysis. Collaborate across regions to deliver unified and actionable analytics strategies. Basic Qualifications: 2+ years of experience in data analytics using Redshift, Oracle, NoSQL, or similar data sources. Strong SQL skills for data retrieval and analysis. Proficiency in data visualization using Tableau, QuickSight, Power BI, or similar tools. Comfort with scripting languages like Python, Java, or R. Experience applying statistical techniques to real-world data problems. Preferred Qualifications: Master's degree or other advanced technical degree. Experience with data modeling and data pipeline architecture. Strong grasp of statistical analysis techniques, including correlation analysis and hypothesis testing. Top 10 Must-Have Skills: Advanced SQL; Data Visualization (Tableau, QuickSight, Power BI, Looker); Statistical Analysis (t-test, chi-squared, regression); Scripting (Python / R / Java); Redshift / Oracle / NoSQL Databases; Dashboard & Report Development; Data Modeling & Pipeline Design; Cross-functional Global Collaboration; Business Metrics & KPI Definition; Executive-Level Reporting
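For illustration, a short Python sketch of the statistical techniques this role calls for (t-test, chi-squared, simple regression) run on synthetic data with SciPy; every number here is made up:

```python
# A short sketch of the statistical tests named above on fabricated
# conversion data; all figures and scenarios are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# t-test: did average order value differ between two markets?
market_a = rng.normal(loc=52.0, scale=8.0, size=200)
market_b = rng.normal(loc=54.5, scale=8.0, size=200)
t_stat, p_val = stats.ttest_ind(market_a, market_b)
print(f"t-test: t={t_stat:.2f}, p={p_val:.4f}")

# Chi-squared: is signup conversion independent of landing-page variant?
#                       converted  not_converted
contingency = np.array([[120, 880],   # variant A
                        [155, 845]])  # variant B
chi2, p, dof, _ = stats.chi2_contingency(contingency)
print(f"chi-squared: chi2={chi2:.2f}, p={p:.4f}, dof={dof}")

# Simple linear regression: does ad spend predict weekly signups?
spend = rng.uniform(10, 100, size=52)
signups = 30 + 2.5 * spend + rng.normal(0, 15, size=52)
result = stats.linregress(spend, signups)
print(f"regression: slope={result.slope:.2f}, r^2={result.rvalue**2:.2f}")
```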
Posted 1 month ago
8.0 - 13.0 years
30 - 35 Lacs
Bengaluru
Work from Office
About The Role Data Engineer -1 (Experience 0-2 years) What we offer Our mission is simple: Building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking by imbibing a technology-first approach in everything we do, with an aim to enhance customer experience by providing superior banking services. We welcome and invite the best technological minds in the country to come join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many. About our team DEX is the central data org for Kotak Bank and manages the entire data experience of Kotak Bank. DEX stands for Kotak's Data Exchange. This org comprises the Data Platform, Data Engineering and Data Governance charters. The org sits closely with the Analytics org. DEX is primarily working on a greenfield project to revamp the entire data platform from on-premise solutions to a scalable AWS cloud-based platform. The team is being built from the ground up, which provides great opportunities for technology fellows to build things from scratch and build one of the best-in-class data lakehouse solutions. The primary skills this team should encompass are software development skills (preferably Python) for platform building on AWS; data engineering with Spark (PySpark, Spark SQL, Scala) for ETL development; and advanced SQL and data modelling for analytics. The org size is expected to be around a 100+ member team, primarily based out of Bangalore, comprising ~10 sub-teams independently driving their charter. As a member of this team, you get the opportunity to learn the fintech space, which is the most sought-after domain in the current world, be an early member in the digital transformation journey of Kotak, learn and leverage technology to build complex data platform solutions including real-time, micro-batch, batch and analytics solutions in a programmatic way, and also be futuristic and build systems which can be operated by machines using AI technologies. The data platform org is divided into 3 key verticals: Data Platform This vertical is responsible for building the data platform, which includes optimized storage for the entire bank and building a centralized data lake, managed compute and orchestration frameworks including concepts of serverless data solutions, managing a central data warehouse for extremely high concurrency use cases, building connectors for different sources, building a customer feature repository, building cost optimization solutions like EMR optimizers, performing automations and building observability capabilities for Kotak's data platform. The team will also be the center for Data Engineering excellence, driving trainings and knowledge sharing sessions with the large data consumer base within Kotak. Data Engineering This team will own data pipelines for thousands of datasets, be skilled to source data from 100+ source systems and enable data consumption for 30+ data analytics products. The team will learn and build data models in a config-based and programmatic way and think big to build one of the most leveraged data models for financial orgs. This team will also enable centralized reporting for Kotak Bank which cuts across multiple products and dimensions. Additionally, the data built by this team will be consumed by 20K+ branch consumers, RMs, Branch Managers and all analytics use cases.
Data Governance The team will be the central data governance team for Kotak Bank, managing Metadata platforms, Data Privacy, Data Security, Data Stewardship and the Data Quality platform. If you have the right data skills and are ready to build data lake solutions from scratch for high-concurrency systems involving multiple systems, then this is the team for you. Your day-to-day role will include: Drive business decisions with technical input and lead the team. Design, implement, and support a data infrastructure from scratch. Manage AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA (a minimal sketch follows below). Extract, transform, and load data from various sources using SQL and AWS big data technologies. Explore and learn the latest AWS technologies to enhance capabilities and efficiency. Collaborate with data scientists and BI engineers to adopt best practices in reporting and analysis. Improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers. Build data platforms, data pipelines, or data management and governance tools. BASIC QUALIFICATIONS for Data Engineer/ SDE in Data Bachelor's degree in Computer Science, Engineering, or a related field Experience in data engineering Strong understanding of AWS technologies, including S3, Redshift, Glue, and EMR Experience with data pipeline tools such as Airflow and Spark Experience with data modeling and data quality best practices Excellent problem-solving and analytical skills Strong communication and teamwork skills Experience in at least one modern scripting or programming language, such as Python, Java, or Scala Strong advanced SQL skills PREFERRED QUALIFICATIONS AWS cloud technologies: Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow Prior experience in the Indian Banking segment and/or Fintech is desired. Experience with non-relational databases and data stores Building and operating highly available, distributed data processing systems for large datasets Professional software engineering and best practices for the full software development life cycle Designing, developing, and implementing different types of data warehousing layers Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions Building scalable data infrastructure and understanding distributed systems concepts SQL, ETL, and data modelling Ensuring the accuracy and availability of data to customers Proficient in at least one scripting or programming language for handling large volume data processing Strong presentation and communications skills.
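As a hedged illustration of the AWS resource management mentioned above (not Kotak's actual tooling), a minimal boto3 sketch that stages a file in S3 and triggers a Glue job; the bucket, key, and job name are hypothetical, and credentials are assumed to come from the environment:

```python
# A minimal sketch of driving S3 and Glue with boto3: upload a raw extract,
# then start the Glue ETL job that curates it. All names are hypothetical.
import boto3

s3 = boto3.client("s3")
glue = boto3.client("glue")

# Stage a raw extract in the data lake's landing prefix.
s3.upload_file(
    Filename="daily_extract.csv",
    Bucket="example-datalake",
    Key="landing/daily_extract.csv",
)

# Kick off the Glue job that transforms landing data into curated tables.
run = glue.start_job_run(
    JobName="curate-daily-extract",           # hypothetical Glue job
    Arguments={"--run_date": "2024-01-01"},   # passed to the job script
)
print("started Glue run:", run["JobRunId"])
```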
Posted 1 month ago
9.0 - 14.0 years
30 - 35 Lacs
Bengaluru
Work from Office
About The Role Data Engineer -2 (Experience 2-5 years) What we offer Our mission is simple: Building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking by imbibing a technology-first approach in everything we do, with an aim to enhance customer experience by providing superior banking services. We welcome and invite the best technological minds in the country to come join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many. About our team DEX is the central data org for Kotak Bank and manages the entire data experience of Kotak Bank. DEX stands for Kotak's Data Exchange. This org comprises the Data Platform, Data Engineering and Data Governance charters. The org sits closely with the Analytics org. DEX is primarily working on a greenfield project to revamp the entire data platform from on-premise solutions to a scalable AWS cloud-based platform. The team is being built from the ground up, which provides great opportunities for technology fellows to build things from scratch and build one of the best-in-class data lakehouse solutions. The primary skills this team should encompass are software development skills (preferably Python) for platform building on AWS; data engineering with Spark (PySpark, Spark SQL, Scala) for ETL development; and advanced SQL and data modelling for analytics. The org size is expected to be around a 100+ member team, primarily based out of Bangalore, comprising ~10 sub-teams independently driving their charter. As a member of this team, you get the opportunity to learn the fintech space, which is the most sought-after domain in the current world, be an early member in the digital transformation journey of Kotak, learn and leverage technology to build complex data platform solutions including real-time, micro-batch, batch and analytics solutions in a programmatic way, and also be futuristic and build systems which can be operated by machines using AI technologies. The data platform org is divided into 3 key verticals: Data Platform This vertical is responsible for building the data platform, which includes optimized storage for the entire bank and building a centralized data lake, managed compute and orchestration frameworks including concepts of serverless data solutions, managing a central data warehouse for extremely high concurrency use cases, building connectors for different sources, building a customer feature repository, building cost optimization solutions like EMR optimizers, performing automations and building observability capabilities for Kotak's data platform. The team will also be the center for Data Engineering excellence, driving trainings and knowledge sharing sessions with the large data consumer base within Kotak. Data Engineering This team will own data pipelines for thousands of datasets, be skilled to source data from 100+ source systems and enable data consumption for 30+ data analytics products. The team will learn and build data models in a config-based and programmatic way and think big to build one of the most leveraged data models for financial orgs. This team will also enable centralized reporting for Kotak Bank which cuts across multiple products and dimensions. Additionally, the data built by this team will be consumed by 20K+ branch consumers, RMs, Branch Managers and all analytics use cases.
Data Governance The team will be the central data governance team for Kotak Bank, managing Metadata platforms, Data Privacy, Data Security, Data Stewardship and the Data Quality platform. If you have the right data skills and are ready to build data lake solutions from scratch for high-concurrency systems involving multiple systems, then this is the team for you. Your day-to-day role will include: Drive business decisions with technical input and lead the team. Design, implement, and support a data infrastructure from scratch. Manage AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA. Extract, transform, and load data from various sources using SQL and AWS big data technologies. Explore and learn the latest AWS technologies to enhance capabilities and efficiency. Collaborate with data scientists and BI engineers to adopt best practices in reporting and analysis. Improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers. Build data platforms, data pipelines, or data management and governance tools. BASIC QUALIFICATIONS for Data Engineer/ SDE in Data Bachelor's degree in Computer Science, Engineering, or a related field Experience in data engineering Strong understanding of AWS technologies, including S3, Redshift, Glue, and EMR Experience with data pipeline tools such as Airflow and Spark Experience with data modeling and data quality best practices Excellent problem-solving and analytical skills Strong communication and teamwork skills Experience in at least one modern scripting or programming language, such as Python, Java, or Scala Strong advanced SQL skills PREFERRED QUALIFICATIONS AWS cloud technologies: Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow Prior experience in the Indian Banking segment and/or Fintech is desired. Experience with non-relational databases and data stores Building and operating highly available, distributed data processing systems for large datasets Professional software engineering and best practices for the full software development life cycle Designing, developing, and implementing different types of data warehousing layers Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions Building scalable data infrastructure and understanding distributed systems concepts SQL, ETL, and data modelling Ensuring the accuracy and availability of data to customers Proficient in at least one scripting or programming language for handling large volume data processing Strong presentation and communications skills.
Posted 1 month ago
4.0 - 10.0 years
6 - 10 Lacs
Hyderabad
Work from Office
Overview About the job Do you have hands-on experience with data engineering and data architecting? Are you familiar with metadata management and associated processes? We're looking for an expert communicator with a strong customer orientation and object-oriented programming experience to join our Corporate Technology and Security Engineering team as a Senior Data Engineer. In this role, you'll design, develop and implement data models, ETL pipelines and warehouses for our internal applications and systems. Additionally, you will provide architectural assessments, strategies and roadmaps; verify performance, fault tolerance and security. If you're craving an exciting new opportunity where you can partner with project managers and other business leaders to facilitate projects that make good use of your data insights, let's chat! iCIMS is a high-growth Software-as-a-Service (SaaS) company headquartered in Holmdel, NJ. We are the industry's #1 recruitment software provider, delivering technology that supports approximately 4,000 contracted customers around the globe. Dedicated to maintaining an inclusive, inspirational and innovative work environment, and committed to our consistent growth, we have a wide range of opportunity for career advancement within our organization. Come grow with us—apply today! Responsibilities Develops and delivers long-term strategic goals for data architecture vision and standards in conjunction with data users, department managers, clients, and other key stakeholders. Creates short-term tactical solutions to achieve long-term objectives and an overall data management roadmap. Establishes methods and procedures for tracking data quality, completeness, redundancy, and improvement. Creates strategies and plans for data security, backup, disaster recovery, business continuity, and archiving. Designs, develops and supports ETL pipelines. Oversees the mapping of data sources, data movement, interfaces, and analytics, with the goal of ensuring data quality. Collaborates with project managers and business unit leaders for all projects involving CRM and downstream data. Addresses data-related problems in regard to systems integration, compatibility, and multiple-platform integration. Designs key components as needed to create testing criteria in order to guarantee the fidelity and performance of data analytics solutions. Implements data management processes, procedures, and decision support. Optimizes and monitors data pipelines feeding data stores or repositories. Works with data governance, customer success and product reporting teams to build out advanced analytics and reporting dashboards leveraging tools such as Tableau, Kibana, etc. Researches emerging trends and best-of-breed solutions for data modeling, data contextualization, and predictive analytics. Proficient understanding of distributed computing principles. Qualifications A minimum of 5 years relevant experience. Hands-on knowledge of data modeling, data profiling or data parsing. Experience in Azure Data Warehousing, Azure Data Factory, SSIS, SSAS, ETL. Familiarity with metadata management and associated processes.
Demonstrated expertise with repository creation, and data and information system life cycle methodologies. Experience with data processing flowcharting techniques. Ability to manage data and metadata migration. Programming experience with Python. Expert in writing SQL and stored procedures. Experience with SFDC, NetSuite, Adaptive, and Concur APIs highly desirable. Knowledge of AWS, GCP, Big Data, and Redshift is desirable. Experience with integration platforms such as Workato is a plus. Excellent client/user interaction skills to determine requirements. Strong customer orientation focus and success in creating a superior customer experience. Good knowledge of applicable data privacy practices and laws. Understanding of web services (SOAP, XML, UDDI, WSDL). Experience in defining, classifying, and maintaining MDM across an evolving set of SaaS interfaces.
Posted 1 month ago
0.0 - 1.0 years
4 - 7 Lacs
Bengaluru
Work from Office
Job Description Summary We are looking to grow our Software Controls and Optimization team at GE Aerospace Research and are looking for top-notch researchers to be part of this exciting journey. As a group, we innovate and execute on the R&D strategy for GE Aerospace on a range of problems, from designing inspection solutions for aircraft engines to building predictive/prescriptive analytics for a variety of applications that improve process efficiencies in the business. Job Description Company Overview Working at GE Aerospace means you are bringing your unique perspective, innovative spirit, drive, and curiosity to a collaborative and diverse team working to advance aerospace for future generations. If you have ideas, we will listen. Join us and see your ideas take flight! Site Overview Established in 2000, the John F. Welch Technology Center (JFWTC) in Bengaluru is our multidisciplinary research and engineering center. Engineers and scientists at JFWTC have contributed to hundreds of aviation patents, pioneering breakthroughs in engine technologies, advanced materials, and additive manufacturing. Role Overview We are looking for highly motivated people with a proven track record to conduct research in natural language processing, artificial intelligence, and machine learning. As a Research Intern, you will be working with scientists in GE Aerospace Research to develop search and recommendation systems to improve the productivity of our Engineering teams. Your responsibilities will include developing and implementing algorithms to process Aerospace domain data for recommendation systems, designing experiments, conducting thorough evaluations, and documenting your work (e.g., publications, invention disclosures). They will also include effectively communicating your findings with the appropriate stakeholders. You will encounter and have an opportunity to tackle unique challenges posed by the data and problems in the Aerospace domain, including data quality, highly domain-specific vocabulary, and how to integrate AI solutions into our safety-critical and regulated workflows and processes. Ideal candidate: Should have experience in machine learning. Required Qualifications Enrolled in a full-time Masters or PhD Degree program in Computer Science, Electronics, Industrial, Electrical, Mechanical or related Engineering field with specialization in Natural Language Processing, Machine Learning, AI or Statistics. At least one year of experience in conducting independent research. Proficient in implementing algorithms, data pipelines and solutions in Python. Self-starter, ability to work in ambiguous environments, and excellent communication skills. Desired Qualifications Enrolled in a PhD Degree program with at least three years of experience in conducting independent research. Previous experience in training, fine-tuning (including instruction-tuning), and deploying Large Language Models / vision algorithms. Proven track record of publications at top AI conferences (or co-located workshops). Experience of working with problems and data from industrial domains (Aviation, Energy, Healthcare, Manufacturing, etc.). Strong foundations in design, analysis, and implementation of algorithms in different computing architectures.
Humble: respectful, receptive, agile, eager to learn Transparent: shares critical information, speaks with candor, contributes constructively Focused: quick learner, strategically prioritizes work, committed Leadership ability: strong communicator, decision-maker, collaborative Problem solver: analytical-minded, challenges existing processes, critical thinker At GE Aerospace, we have a relentless dedication to the future of safe and more sustainable flight and believe in our talented people to make it happen. Here, you will have the opportunity to work on really cool things with really smart and collaborative people. Together, we will mobilize a new era of growth in aerospace and defense. Where others stop, we accelerate. Additional Information Relocation Assistance Provided: Yes
Posted 1 month ago
9.0 - 13.0 years
32 - 40 Lacs
Ahmedabad
Remote
About the Role: We are looking for a hands-on AWS Data Architect or Lead Engineer to design and implement scalable, secure, and high-performing data solutions. This is an individual contributor role where you will work closely with data engineers, analysts, and stakeholders to build modern, cloud-native data architectures across real-time and batch pipelines. Experience: 7-15 Years Location: Fully Remote Company: Armakuni India Key Responsibilities: Data Architecture Design: Develop and maintain a comprehensive data architecture strategy that aligns with the business objectives and technology landscape. Data Modeling: Create and manage logical, physical, and conceptual data models to support various business applications and analytics. Database Design: Design and implement database solutions, including data warehouses, data lakes, and operational databases. Data Integration: Oversee the integration of data from disparate sources into unified, accessible systems using ETL/ELT processes. Data Governance: Implement and enforce data governance policies and procedures to ensure data quality, consistency, and security. Technology Evaluation: Evaluate and recommend data management tools, technologies, and best practices to improve data infrastructure and processes. Collaboration: Work closely with data engineers, data scientists, business analysts, and other stakeholders to understand data requirements and deliver effective solutions. Documentation: Create and maintain documentation related to data architecture, data flows, data dictionaries, and system interfaces. Performance Tuning: Optimize database performance through tuning, indexing, and query optimization. Security: Ensure data security and privacy by implementing best practices for data encryption, access controls, and compliance with relevant regulations (e.g., GDPR, CCPA). Required Skills: Helping project teams with solutions architecture, troubleshooting, and technical implementation assistance. Proficiency in SQL and database management systems (e.g., MySQL, PostgreSQL, Oracle, SQL Server). Minimum 7 to 15 years of experience in data architecture or related roles. Experience with big data technologies (e.g., Hadoop, Spark, Kafka, Airflow). Expertise with cloud platforms (e.g., AWS, Azure, Google Cloud) and their data services. Knowledge of data integration tools (e.g., Informatica, Talend, Fivetran, Meltano). Understanding of data warehousing concepts and tools (e.g., Snowflake, Redshift, Synapse, BigQuery). Experience with data governance frameworks and tools.
Posted 1 month ago
2.0 - 6.0 years
0 - 1 Lacs
Pune
Work from Office
As Lead Data Engineer, you'll design and manage scalable ETL pipelines and clean, structured data flows for real-time retail analytics. You'll work closely with ML engineers and business teams to deliver high-quality, ML-ready datasets. Responsibilities: Develop and optimize large-scale ETL pipelines Design schema-aware data flows and dashboard-ready datasets Manage data pipelines on AWS (S3, Glue, Redshift) Work with transactional and retail data for real-time insights
Posted 1 month ago
2.0 - 5.0 years
4 - 7 Lacs
Ahmedabad
Work from Office
Roles and Responsibility : Collaborate with stakeholders to understand business requirements and data needs. Translate business requirements into scalable and efficient data engineering solutions. Design, develop, and maintain data pipelines using AWS serverless technologies. Implement data modeling techniques to optimize data storage and retrieval processes. Develop and deploy data processing and transformation frameworks for real-time and batch processing. Ensure data pipelines are scalable, reliable, and performant for large-scale data sizes. Implement data documentation and observability tools and practices to monitor...
Posted 1 month ago
2.0 - 6.0 years
4 - 8 Lacs
Bengaluru
Work from Office
As an Associate Software Developer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets. In this role, your responsibilities may include: Implementing and validating predictive models as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Experience in Big Data technologies like Hadoop, Apache Spark, Hive. Practical experience in Core Java (1.8 preferred) / Python / Scala. Experience in AWS cloud services including S3, Redshift, EMR etc. Strong expertise in RDBMS and SQL. Good experience in Linux and shell scripting. Experience in data pipelines using Apache Airflow Preferred technical and professional experience You thrive on teamwork and have excellent verbal and written communication skills. Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions Ability to communicate results to technical and non-technical audiences
Posted 1 month ago
2.0 - 6.0 years
4 - 8 Lacs
Kochi
Work from Office
As an Associate Software Developer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets. In this role, your responsibilities may include: Implementing and validating predictive models as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Experience in Big Data technologies like Hadoop, Apache Spark, Hive. Practical experience in Core Java (1.8 preferred) / Python / Scala. Experience in AWS cloud services including S3, Redshift, EMR etc. Strong expertise in RDBMS and SQL. Good experience in Linux and shell scripting. Experience in data pipelines using Apache Airflow Preferred technical and professional experience You thrive on teamwork and have excellent verbal and written communication skills. Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions Ability to communicate results to technical and non-technical audiences
Posted 1 month ago
4.0 - 8.0 years
25 - 30 Lacs
Pune
Hybrid
So, what's the role all about? As a Data Engineer, you will be responsible for designing, building, and maintaining large-scale data systems, as well as working with cross-functional teams to ensure efficient data processing and integration. You will leverage your knowledge of Apache Spark to create robust ETL processes, optimize data workflows, and manage high volumes of structured and unstructured data. How will you make an impact? Design, implement, and maintain data pipelines using Apache Spark for processing large datasets. Work with data engineering teams to optimize data workflows for performance and scalability. Integrate data from various sources, ensuring clean, reliable, and high-quality data for analysis. Develop and maintain data models, databases, and data lakes. Build and manage scalable ETL solutions to support business intelligence and data science initiatives. Monitor and troubleshoot data processing jobs, ensuring they run efficiently and effectively. Collaborate with data scientists, analysts, and other stakeholders to understand business needs and deliver data solutions. Implement data security best practices to protect sensitive information. Maintain a high level of data quality and ensure timely delivery of data to end-users. Continuously evaluate new technologies and frameworks to improve data engineering processes. Have you got what it takes? 8-11 years of experience as a Data Engineer, with a strong focus on Apache Spark and big data technologies. Expertise in Spark SQL, DataFrames, and RDDs for data processing and analysis (a brief sketch contrasting the two APIs follows this listing). Proficient in programming languages such as Python, Scala, or Java for data engineering tasks. Hands-on experience with cloud platforms like AWS, specifically with data processing and storage services (e.g., S3, BigQuery, Redshift, Databricks). Experience with ETL frameworks and tools such as Apache Kafka, Airflow, or NiFi. Strong knowledge of data warehousing concepts and technologies (e.g., Redshift, Snowflake, BigQuery). Familiarity with containerization technologies like Docker and Kubernetes. Knowledge of SQL and relational databases, with the ability to design and query databases effectively. Solid understanding of distributed computing, data modeling, and data architecture principles. Strong problem-solving skills and the ability to work with large and complex datasets. Excellent communication and collaboration skills to work effectively with cross-functional teams. What's in it for you? Join an ever-growing, market disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr! Enjoy NiCE-FLEX! At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week.
Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere. Requisition ID: 7235 Reporting into: Tech Manager Role Type: Individual Contributor
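As referenced in the listing above, a brief sketch contrasting the Spark DataFrame API with Spark SQL for the same aggregation; the table and columns are hypothetical:

```python
# The same group-by aggregation expressed twice: once with the DataFrame
# API and once as Spark SQL over a temporary view. Data is made up.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sql_vs_dataframe").getOrCreate()

orders = spark.createDataFrame(
    [("IN", 120.0), ("IN", 80.0), ("US", 200.0)],
    ["country", "amount"],
)

# DataFrame API version.
by_country_df = orders.groupBy("country").agg(F.sum("amount").alias("revenue"))

# Equivalent Spark SQL version against a temporary view.
orders.createOrReplaceTempView("orders")
by_country_sql = spark.sql(
    "SELECT country, SUM(amount) AS revenue FROM orders GROUP BY country"
)

by_country_df.show()
by_country_sql.show()   # same result; choose whichever reads better
```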
Posted 1 month ago
4.0 - 8.0 years
20 - 32 Lacs
Hyderabad, Gurugram
Work from Office
• Designing, developing & deploying cloud-based data platforms using AWS • Integrating & processing structured & unstructured data from various sources • Troubleshooting data platform issues. WhatsApp (ANUJ - 8249759636) for more details.
Posted 1 month ago
2.0 - 4.0 years
4 - 7 Lacs
Mumbai
Work from Office
About The Role Grade: M1 to M3 Reports to: Head Analytics Is a Team Leader? No Team Size: -- Role: Business Analyst/Sr Business Analyst Business: Not Applicable Department: Analytics Sub-Department: Not Applicable Location: Mumbai Role We are seeking a highly analytical and detail-oriented individual to join our team as a Data Analysis and Insights Specialist. In this role, you will be responsible for analyzing complex datasets, extracting meaningful insights, and proposing actionable recommendations to optimize processes and decision-making. Your ability to conduct thorough cost-benefit analyses and effectively communicate findings to stakeholders will be crucial in driving informed business strategies. Key Responsibilities Data Analysis & Insights Generation Utilize statistical techniques to interpret data, identify trends, and uncover insights that drive operational efficiencies and strategic decision-making Derive actionable insights from data analysis to address business challenges, improve performance, and capitalize on opportunities Cost-Benefit Analysis Conduct comprehensive cost-benefit analyses to evaluate proposed solutions and ensure alignment with organizational objectives and financial goals Develop well-supported recommendations based on data-driven insights and cost-benefit assessments, ensuring clarity and feasibility Continuous Improvement Stay updated with industry trends, best practices, and evolving analytical techniques to enhance data analysis capabilities and contribute to continuous improvement initiatives Stakeholder Communication Present findings, recommendations, and analyses to stakeholders in a clear, concise manner, fostering understanding and buy-in for proposed solutions Qualifications MBA Finance or MBA Business Analytics with up to 2.5 years of relevant experience Role Proficiencies Must Have Skills MS Excel & PowerPoint Python/SAS & SQL Insight generation Strong communication & presentation skills CBA or impact analysis Good to Have Skills BI tools (Tableau/Power BI) Cloud data analytics solutions (SageMaker/Azure) Cloud data warehousing solutions (Redshift/Snowflake)
Posted 1 month ago
5.0 - 10.0 years
5 - 15 Lacs
Gurugram
Hybrid
IntraEdge is looking for Big Data engineers/developers who will work on collecting, storing, processing, and analyzing huge sets of data. You will also be responsible for integrating them with the architecture used across the company. Responsibilities: Selecting and integrating any Big Data tools and frameworks required to provide requested capabilities. Partners with architects and other senior leads to address data needs. Partners with data scientists and product teams to build and deploy machine learning models that unlock growth. Build custom integrations and data pipelines between cloud-based systems using APIs. Write complex and efficient code to transform raw data sources into easily accessible models in languages such as Python, Scala or SQL. Design, develop and test a large-scale, custom-distributed software system using the latest Java, Scala and Big Data technologies. Actively contribute to the technological strategy definition (design, architecture and interfaces) in order to effectively respond to our client's business needs. Participate in technological watch and the definition of standards to ensure that our systems and data warehouses are efficient, resilient and durable. Experienced in using Informatica or similar products, with an understanding of heterogeneous data replication techniques. Build data expertise and own data quality for the pipelines you create. Skills and Qualifications: Bachelor's/Master's degree in Computer Science, Management of Information Systems or equivalent. 4 or more years of relevant software engineering experience (Big Data: Hive, Spark, Kafka, Cassandra, Scala, Python, SQL) in a data-focused role. Experience in GCP. Building batch/streaming ETL pipelines with frameworks like Spark, Spark Streaming and Apache Beam, and working with messaging systems like Pub/Sub and Kafka (a minimal streaming sketch follows below). Working experience with Java tools or Apache Camel. Experience in designing and building highly scalable and reliable data pipelines using Big Data tools (Airflow, Python, Redshift/Snowflake). Software development experience with proficiency in Python, Java, Scala or another language. Good knowledge of Big Data querying tools, such as Hive, and experience with Spark/PySpark. Good knowledge of SQL and Python. Ability to analyse and obtain insights from complex/large data sets. Design and develop highly performing SQL Server database objects. Experience: 5-10 years. Notice period: Serving NP/immediate joiners/max 30 days. Location: Gurugram/Bangalore/Pune/Remote. Salary: Decent hike on current CTC.
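For the streaming-pipeline requirement above, a minimal (hypothetical) Spark Structured Streaming sketch that reads from Kafka and writes Parquet; the broker, topic, and paths are made up, and the job assumes the spark-sql-kafka connector package is available on the classpath:

```python
# A minimal Structured Streaming sketch: consume a Kafka topic and land
# the payloads as Parquet with checkpointing. All names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka_stream").getOrCreate()

# Subscribe to the (hypothetical) 'events' topic.
stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka values arrive as bytes; cast to string before writing.
query = (
    stream.selectExpr("CAST(value AS STRING) AS payload")
    .writeStream.format("parquet")
    .option("path", "s3://example-bucket/stream-output/")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/")
    .start()
)

query.awaitTermination()  # block until the stream is stopped
```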
Posted 1 month ago
3.0 - 5.0 years
5 - 7 Lacs
Bengaluru
Work from Office
As Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark Framework with Python or Scala on Hadoop and the AWS Cloud Data Platform Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise AWS Data Vault 2.0 development mechanism for agile data ingestion, storage and scaling Databricks for complex queries on transformation, aggregation, and business logic implementation AWS Redshift and Redshift Spectrum for complex queries on transformation, aggregation, and business logic implementation DWH concepts: star schema, materialized view concept Strong SQL and data manipulation/transformation skills Preferred technical and professional experience Robust and Scalable Cloud Infrastructure End-to-End Data Engineering Pipeline Versatile Programming Capabilities
Posted 1 month ago
5.0 - 7.0 years
7 - 9 Lacs
Noida
Work from Office
Analytics - Risk Product Paytm is India's leading mobile payments and financial services distribution company. Pioneer of the mobile QR payments revolution in India, Paytm builds technologies that help small businesses with payments and commerce. Paytm's mission is to serve half a billion Indians and bring them to the mainstream economy with the help of technology. About the Role: We seek an experienced Assistant General Manager - Analytics for data analysis and reporting across our lending verticals. The ideal candidate will use SQL and dashboarding tools to deliver actionable insights and manage data needs for multiple lending verticals. A drive to implement AI to automate repetitive workflows is essential. Key Responsibilities: Develop, maintain, and automate reporting and dashboards for lending vertical KPIs. Manage data and analytics requirements for multiple lending verticals. Collaborate with stakeholders to understand data needs and provide support. Analyze data trends to provide insights and recommendations. Design and implement data methodologies to improve data quality. Ensure data accuracy and integrity. Communicate findings to technical and non-technical audiences. Stay updated on data analytics trends and identify opportunities for AI implementation. Drive the use of AI to automate repetitive data workflows. Qualifications Bachelor's degree in a quantitative field. 5-7 years of data analytics experience. Strong SQL and PySpark proficiency. Experience with data visualization tools (e.g., Tableau, Power BI, Looker). Lending/financial services experience is a plus. Excellent analytical and problem-solving skills. Strong communication and presentation skills. Ability to manage multiple projects. Ability to work independently and in a team. Demonstrated drive to use and implement AI for automation. Preferred Qualifications Experience with statistical modeling and data mining. Familiarity with cloud data warehousing (e.g., Snowflake, BigQuery, Redshift). Experience with Python or R. Experience implementing AI solutions in a business setting. Why Join Us? Bragging rights to be behind the largest fintech lending play in India. A fun, energetic and once-in-a-lifetime environment that enables you to achieve your best possible outcome in your career. With enviable 500 mn+ registered users, 21 mn+ merchants and depth of data in our ecosystem, we are in a unique position to democratize credit for deserving consumers & merchants - and we are committed to it. India's largest digital lending story is brewing here. It's your opportunity to be a part of the story!
Posted 1 month ago
15.0 - 20.0 years
17 - 22 Lacs
Bengaluru
Work from Office
Project Role: Data Engineer Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems. Must have skills: Databricks Unified Data Analytics Platform Good to have skills: NA Minimum 7.5 year(s) of experience is required Educational Qualification: 15 years full time education Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable to meet the demands of the organization. Roles & Responsibilities: - Expected to be an SME. - Collaborate and manage the team to perform. - Responsible for team decisions. - Engage with multiple teams and contribute on key decisions. - Provide solutions to problems for their immediate team and across multiple teams. - Mentor junior team members to enhance their skills and knowledge in data engineering. - Continuously evaluate and improve data processes to enhance efficiency and effectiveness. Professional & Technical Skills: - Must To Have Skills: Proficiency in Databricks Unified Data Analytics Platform. - Experience with data pipeline orchestration tools such as Apache Airflow or similar. - Strong understanding of ETL processes and data warehousing concepts. - Familiarity with cloud platforms like AWS, Azure, or Google Cloud. - Knowledge of programming languages such as Python or Scala for data manipulation. Additional Information: - The candidate should have minimum 7.5 years of experience in Databricks Unified Data Analytics Platform. - This position is based at our Bengaluru office. - A 15 years full time education is required. Qualification: 15 years full time education
Posted 1 month ago
15.0 - 20.0 years
17 - 22 Lacs
Bengaluru
Work from Office
Project Role: Data Engineer Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems. Must have skills: PySpark Good to have skills: NA Minimum 5 year(s) of experience is required Educational Qualification: 15 years full time education Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable to meet the demands of the organization. Roles & Responsibilities: - Expected to be an SME. - Collaborate and manage the team to perform. - Responsible for team decisions. - Engage with multiple teams and contribute on key decisions. - Provide solutions to problems for their immediate team and across multiple teams. - Mentor junior team members to enhance their skills and knowledge in data engineering. - Continuously evaluate and improve data processes to enhance efficiency and effectiveness. Professional & Technical Skills: - Must To Have Skills: Proficiency in PySpark. - Strong understanding of data modeling and database design principles. - Experience with data warehousing solutions and ETL tools. - Familiarity with cloud platforms such as AWS, Azure, or Google Cloud. - Knowledge of data governance and data quality frameworks. Additional Information: - The candidate should have minimum 5 years of experience in PySpark. - This position is based at our Bengaluru office. - A 15 years full time education is required. Qualification: 15 years full time education
Posted 1 month ago
15.0 - 20.0 years
17 - 22 Lacs
Chennai
Work from Office
Project Role: Data Engineer Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems. Must have skills: PySpark Good to have skills: NA Minimum 5 year(s) of experience is required Educational Qualification: 15 years full time education Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable to meet the demands of the organization. Roles & Responsibilities: - Expected to be an SME. - Collaborate and manage the team to perform. - Responsible for team decisions. - Engage with multiple teams and contribute on key decisions. - Provide solutions to problems for their immediate team and across multiple teams. - Mentor junior team members to enhance their skills and knowledge in data engineering. - Continuously evaluate and improve data processes to enhance efficiency and effectiveness. Professional & Technical Skills: - Must To Have Skills: Proficiency in PySpark. - Good To Have Skills: Experience with Apache Kafka. - Strong understanding of data warehousing concepts and architecture. - Familiarity with cloud platforms such as AWS or Azure. - Experience in SQL and NoSQL databases for data storage and retrieval. Additional Information: - The candidate should have minimum 5 years of experience in PySpark. - This position is based in Chennai. - A 15 years full time education is required. Qualification: 15 years full time education
Posted 1 month ago