4.0 - 8.0 years
0 Lacs
Karnataka
On-site
As a Data Engineer at GoKwik, you will play a crucial role in collaborating with product managers, data scientists, business intelligence teams, and software development engineers to develop and implement data-driven strategies throughout the organization. Your responsibilities will span process enhancements, data model and architecture creation, pipeline development, and data application deployment.

You will focus on continuously improving data optimization processes, ensuring data quality and security, and creating new data models and pipelines as necessary, striving for high performance, operational excellence, accuracy, and reliability within the system. Utilizing cutting-edge tools and technologies, you will establish a data architecture that supports new initiatives and future products while keeping products and pipelines test-driven, easily maintainable, and reusable. You will also build infrastructure for data extraction, transformation, and loading from a diverse range of data sources, maintain the data foundations that support the marketing and sales functions, and help meet business requirements by increasing automation and implementing scalable analytic solutions.

To excel in this position, you should hold a degree in Computer Science, Mathematics, or a related field, along with a minimum of 4 years of experience in Data Engineering. Proficiency in SQL, relational databases, data pipeline development, Python, and cloud services like AWS is essential. Strong communication skills, problem-solving abilities, and the capability to work in dynamic environments are key traits we are looking for in the ideal candidate.

At GoKwik, we value independence, resourcefulness, analytical thinking, and effective problem-solving skills. We seek individuals who can adapt to change, thrive in challenging situations, and communicate effectively. If you are excited about taking on impactful challenges within a fast-growing entrepreneurial environment, we welcome you to join our team and be part of our journey towards innovation and excellence.
Posted 2 weeks ago
7.0 - 10.0 years
15 - 30 Lacs
Hyderabad, Bengaluru
Hybrid
Job Role: Backend and Data Pipeline Engineer
Location: Hyderabad/Bangalore (Hybrid)
Job Type: Full-time
** Only Immediate Joiners **

Job Summary:

The Team: We're investing in technology to develop new products that help our customers drive their growth and transformation agenda. These include new data integration, advanced analytics, and modern applications that address new customer needs and are highly visible and strategic within the organization. Do you love building products on platforms at scale while leveraging cutting-edge technology? Do you want to deliver innovative solutions to complex problems? If so, be part of our mighty team of engineers and play a key role in driving our business strategies.

The Impact: We stand at the crossroads of innovation through Data Products, bringing a competitive advantage to our business through the delivery of automotive forecasting solutions. Your work will contribute to the growth and success of our organization and provide valuable insights to our clients.

What's in it for you: We are looking for an innovative and mission-driven software/data engineer to make a significant impact by designing and developing AWS cloud-native solutions that enable analysts to forecast long- and short-term trends in the automotive industry. This role requires cutting-edge data and cloud-native technical expertise as well as the ability to work independently in a fast-paced, collaborative, and dynamic work environment.

Responsibilities:
- Design, develop, and maintain scalable data pipelines, including complex algorithms
- Build and maintain UI backend services using Python, C#, or similar, ensuring responsiveness and high performance
- Ensure data quality and integrity through robust validation processes
- Apply a strong understanding of data integration and data modeling concepts
- Lead data integration projects and mentor junior engineers
- Collaborate with cross-functional teams to gather data requirements
- Collaborate with data scientists and analysts to optimize data flow and storage for advanced analytics
- Take ownership of the modules you work on, deliver on time and with quality, and follow software development best practices
- Utilize Redis for caching and data storage solutions to enhance application performance

What We're Looking For:
- Bachelor's degree in Computer Science or a related field
- Strong analytical and problem-solving skills
- 7+ years of experience in Data Engineering/Advanced Analytics
- Proficiency in Python and experience with Flask for backend development
- Strong knowledge of object-oriented programming
- AWS proficiency is a big plus: ECR, containers
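Since the role above calls for Flask backend services backed by Redis caching, here is a minimal sketch of that pattern, assuming the flask and redis packages; the endpoint path, cache key format, and TTL are illustrative, not details from the employer.

```python
import json

import redis
from flask import Flask, jsonify

app = Flask(__name__)
cache = redis.Redis(host="localhost", port=6379, db=0)

def compute_forecast(segment: str) -> dict:
    # Placeholder for the expensive pipeline or model call.
    return {"segment": segment, "forecast": [1.0, 1.1, 1.2]}

@app.route("/forecast/<segment>")
def forecast(segment: str):
    key = f"forecast:{segment}"          # hypothetical cache key scheme
    cached = cache.get(key)              # serve from Redis when possible
    if cached is not None:
        return jsonify(json.loads(cached))
    result = compute_forecast(segment)
    cache.setex(key, 300, json.dumps(result))  # cache for 5 minutes
    return jsonify(result)
```

Caching the computed response keeps the UI responsive, which is the stated goal of the Redis requirement.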
Posted 2 weeks ago
9.0 - 12.0 years
7 - 11 Lacs
Hyderabad
Work from Office
Primarily looking for a candidate with strong expertise in data-related skills, including:

- SQL & Database Management: Deep knowledge of relational databases (PostgreSQL), cloud-hosted data platforms (AWS, Azure, GCP), and data warehouses like Snowflake.
- ETL/ELT Tools: Experience with SnapLogic, StreamSets, or DBT for building and maintaining data pipelines, with extensive experience on data pipelines.
- Data Modeling & Optimization: Strong understanding of data modeling, OLAP systems, query optimization, and performance tuning.
- Cloud & Security: Familiarity with cloud platforms and SQL security techniques (e.g., data encryption, TDE).
- Data Warehousing: Experience managing large datasets, data marts, and optimizing databases for performance.
- Agile & CI/CD: Knowledge of Agile methodologies and CI/CD automation tools.

Important: The candidate should have a strong data engineering background with hands-on experience in handling large volumes of data, data pipelines, and cloud-based data systems.

Responsibilities:
- Build the data pipeline for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and cloud database technologies.
- Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data needs.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Quickly analyze existing SQL code and make improvements to enhance performance, take advantage of new SQL features, close security gaps, and increase robustness and maintainability of the code.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery for greater scalability, etc.
- Unit test databases and perform bug fixes.
- Develop best practices for database design and development activities.
- Take on technical leadership responsibilities for database projects across various scrum teams.
- Manage exploratory data analysis to support dashboard development (desirable).

Required Skills:
- Strong experience in SQL with expertise in relational databases (PostgreSQL preferable, cloud-hosted in AWS/Azure/GCP) or any cloud-based data warehouse (like Snowflake or Azure Synapse).
- Competence in data preparation and/or ETL/ELT tools like SnapLogic, StreamSets, and DBT (preferably strong working experience in one or more) to build and maintain complex data pipelines and flows that handle large volumes of data.
- Understanding of data modeling techniques and working knowledge of OLAP systems.
- Deep knowledge of databases, data marts, data warehouse enterprise systems, and handling of large datasets.
- In-depth knowledge of ingestion techniques, data cleaning, de-duplication, etc.
- Ability to fine-tune report-generating queries.
- Solid understanding of normalization and denormalization of data, database exception handling, profiling queries, performance counters, debugging, and database and query optimization techniques.
- Understanding of index design and performance-tuning techniques.
- Familiarity with SQL security techniques such as column-level data encryption, Transparent Data Encryption (TDE), signed stored procedures, and assignment of user permissions.
- Experience in understanding source data from various platforms and mapping it into Entity Relationship (ER) models for data integration and reporting (desirable).
- Adherence to standards for all database work, e.g., data models, data architecture, and naming conventions.
- Exposure to source control like Git or Azure DevOps.
- Understanding of Agile methodologies (Scrum, Kanban).
Posted 2 weeks ago
4.0 - 8.0 years
0 - 0 Lacs
Pune
Hybrid
So, what's the role all about?

Within Actimize, the AI and Analytics Team is developing the next-generation advanced analytical cloud platform that will harness the power of data to provide maximum accuracy for our clients' Financial Crime programs. As part of the PaaS/SaaS development group, you will be responsible for developing this platform for Actimize cloud-based solutions and for working with cutting-edge cloud technologies.

How will you make an impact?

NICE Actimize is the largest and broadest provider of financial crime, risk, and compliance solutions for regional and global financial institutions, and has been consistently ranked as number one in the space. At NICE Actimize, we recognize that every employee's contributions are integral to our company's growth and success. To find and acquire the best and brightest talent around the globe, we offer a challenging work environment, competitive compensation and benefits, and rewarding career opportunities. Come share, grow and learn with us – you'll be challenged, you'll have fun, and you'll be part of a fast-growing, highly respected organization. This new SaaS platform will enable our customers (some of the biggest financial institutions around the world) to create solutions on the platform to fight financial crime.

Have you got what it takes?
- Design, implement, and maintain real-time and batch data pipelines for fraud detection systems.
- Automate data ingestion from transactional systems, third-party fraud intelligence feeds, and behavioral analytics platforms.
- Ensure high data quality, lineage, and traceability to support audit and compliance requirements.
- Collaborate with fraud analysts and data scientists to deploy and monitor machine learning models in production.
- Monitor pipeline performance and implement alerting for anomalies or failures.
- Ensure data security and compliance with financial regulations.

Qualifications:
- Bachelor's or master's degree in Computer Science, Data Engineering, or a related field.
- 4-6 years of experience in a DataOps role, preferably in fraud or risk domains.
- Strong programming skills in Python and SQL.
- Knowledge of financial fraud patterns, transaction monitoring, and behavioral analytics.
- Familiarity with fraud detection systems, rules engines, or anomaly detection frameworks.
- Experience with AWS cloud platforms.
- Understanding of data governance, encryption, and secure data handling practices.
- Experience with fraud analytics tools or platforms like Actimize.

What's in it for you?

Join an ever-growing, market-disrupting global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr!

Enjoy NiCE-FLEX! At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Requisition ID: 7822
Reporting into: Director
Role Type: Tech Manager
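The responsibilities above center on pipeline validation and anomaly alerting; the following is a minimal pandas sketch of such a data-quality gate, with column names and thresholds chosen purely for illustration rather than taken from the posting.

```python
import logging

import pandas as pd

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("fraud-pipeline")

def validate_batch(df: pd.DataFrame) -> pd.DataFrame:
    # Basic data-quality gates: required fields present, amounts in range.
    missing = df["transaction_id"].isna().sum()
    if missing:
        log.warning("%d rows missing transaction_id", missing)
    bad_amounts = df[(df["amount"] <= 0) | (df["amount"] > 1_000_000)]
    if len(bad_amounts):
        log.warning("%d rows with out-of-range amounts", len(bad_amounts))
    return df.dropna(subset=["transaction_id"])

batch = pd.DataFrame(
    {"transaction_id": ["t1", None, "t3"], "amount": [120.0, 50.0, -5.0]}
)
clean = validate_batch(batch)
log.info("kept %d of %d rows", len(clean), len(batch))
```

In a production pipeline the warnings would feed an alerting channel instead of a local logger.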
Posted 2 weeks ago
5.0 - 10.0 years
7 - 13 Lacs
Pune, Chennai, Bengaluru
Work from Office
Role & responsibilities

Key Responsibilities: Develop, test, and deploy robust dashboards and reports in Power BI using SAP HANA and Snowflake datasets.

Basic Qualifications:
- Excellent verbal and written communication skills
- 5+ years of experience working with Power BI on SAP HANA and Snowflake datasets
- 5+ years of hands-on experience developing moderate to complex ETL data pipelines is a plus
- 5+ years of hands-on experience with the ability to resolve complex SQL query performance issues
- 5+ years of ETL Python development experience; experience parallelizing pipelines a plus
- Demonstrated ability to troubleshoot complex query, pipeline, and data quality issues
Posted 2 weeks ago
8.0 - 13.0 years
18 - 20 Lacs
Noida
Remote
Job Title: Cloud Data Architect
Location: 100% Remote
Time: Overlap with US CST hours
Duration: 6+ Months

Job Description:
Seeking candidates with at least 5-7 years of experience as a strong data architect who has previously done both data engineering and data architecture. Strong SQL and Snowflake experience is required. Manufacturing industry experience is a must-have.

Snowflake / SQL Architect:
- Architect and manage scalable data solutions using Snowflake and advanced SQL, optimizing performance for analytics and reporting.
- Design and implement data pipelines, data warehouses, and data lakes, ensuring efficient data ingestion and transformation.
- Develop best practices for data security, access control, and compliance within cloud-based data environments.
- Collaborate with cross-functional teams to understand business needs and translate them into robust data architectures.
- Evaluate and integrate third-party tools and technologies to enhance the Snowflake ecosystem and overall data strategy.

Thanks & Regards,
Abhinav Krishna Srivastava
Technical Resource Specialist
Email: asrivastava@fcsltd.com
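As an illustration of the Snowflake-from-Python work this role implies, here is a minimal sketch using the snowflake-connector-python package; the account, credentials, and table names are placeholders, not details from the posting.

```python
import snowflake.connector

# Connection parameters are placeholders; real values come from a secrets store.
conn = snowflake.connector.connect(
    account="your_account",
    user="your_user",
    password="your_password",
    warehouse="ANALYTICS_WH",
    database="MFG_DB",
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    # Hypothetical manufacturing aggregation for a reporting layer.
    cur.execute(
        "SELECT plant_id, SUM(units) FROM production GROUP BY plant_id"
    )
    for plant_id, units in cur:
        print(plant_id, units)
finally:
    conn.close()
```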
Posted 2 weeks ago
4.0 - 9.0 years
6 - 11 Lacs
Noida
Work from Office
Role: Senior Databricks Engineer

As a Senior Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges.

What you'll do:
- Design and develop data processing pipelines and analytics solutions using Databricks.
- Architect scalable and efficient data models and storage solutions on the Databricks platform.
- Collaborate with architects and other teams to migrate the current solution to Databricks.
- Optimize performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.
- Use best practices for data governance, security, and compliance on the Databricks platform.
- Mentor junior engineers and provide technical guidance.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.

You'll be expected to have:
- Bachelor's or master's degree in Computer Science, Engineering, or a related field.
- 5 to 8 years of overall experience and 2+ years of experience designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP, and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills with the ability to work effectively in cross-functional teams.
- Good to have: experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.
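For context on the kind of pipeline this posting describes, here is a minimal PySpark sketch of a Databricks-style job: read raw events, aggregate, and write a Delta table. Paths, columns, and table names are assumptions for illustration only.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-pipeline").getOrCreate()

# Ingest: raw JSON events from a mounted storage path (hypothetical).
raw = spark.read.json("/mnt/raw/events/")

# Transform: daily event counts by type.
daily = (
    raw.withColumn("event_date", F.to_date("event_ts"))
       .groupBy("event_date", "event_type")
       .agg(F.count("*").alias("events"))
)

# Load: persist as a Delta table for downstream analytics.
daily.write.format("delta").mode("overwrite").saveAsTable("analytics.daily_events")
```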
Posted 2 weeks ago
8.0 - 10.0 years
9 - 13 Lacs
Gurugram
Remote
Role Responsibilities:
- Design and implement data pipelines using MS Fabric.
- Develop data models to support business intelligence and analytics.
- Manage and optimize ETL processes for data extraction, transformation, and loading.
- Collaborate with cross-functional teams to gather and define data requirements.
- Ensure data quality and integrity in all data processes.
- Implement best practices for data management, storage, and processing.
- Conduct performance tuning for data storage and retrieval for enhanced efficiency.
- Generate and maintain documentation for data architecture and data flow.
- Participate in troubleshooting data-related issues and implement solutions.
- Monitor and optimize cloud-based solutions for scalability and resource efficiency.
- Evaluate emerging technologies and tools for potential incorporation in projects.
- Assist in designing data governance frameworks and policies.
- Provide technical guidance and support to junior data engineers.
- Participate in code reviews and ensure adherence to coding standards.
- Stay updated with industry trends and best practices in data engineering.

Qualifications:
- 8+ years of experience in data engineering roles.
- Strong expertise in MS Fabric and related technologies.
- Proficiency in SQL and relational database management systems.
- Experience with data warehousing solutions and data modeling.
- Hands-on experience in ETL tools and processes.
- Knowledge of cloud computing platforms (Azure, AWS, GCP).
- Familiarity with Python or similar programming languages.
- Ability to communicate complex concepts clearly to non-technical stakeholders.
- Experience in implementing data quality measures and data governance.
- Strong problem-solving skills and attention to detail.
- Ability to work independently in a remote environment.
- Experience with data visualization tools is a plus.
- Excellent analytical and organizational skills.
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Experience in Agile methodologies and project management.
Posted 2 weeks ago
7.0 - 10.0 years
10 - 14 Lacs
Gurugram
Work from Office
About the Job:

We are seeking a highly skilled and experienced Senior Data Engineer to join our dynamic team. In this pivotal role, you will be instrumental in driving our data engineering initiatives, with a strong emphasis on leveraging Dataiku's capabilities to enhance data processing and analytics. You will be responsible for designing, developing, and optimizing robust data pipelines, ensuring seamless integration of diverse data sources, and maintaining high data quality and accessibility to support our business intelligence and advanced analytics projects. This role requires a unique blend of expertise in traditional data engineering principles, advanced data modeling, and a forward-thinking approach to integrating cutting-edge AI technologies, particularly LLM Mesh for Generative AI applications. If you are passionate about building scalable data solutions and are eager to explore the cutting edge of AI, we encourage you to apply.

Key Responsibilities:
- Dataiku Leadership: Drive data engineering initiatives with a strong emphasis on leveraging Dataiku capabilities for data preparation, analysis, visualization, and the deployment of data solutions.
- Data Pipeline Development: Design, develop, and optimize robust and scalable data pipelines to support various business intelligence and advanced analytics projects. This includes developing and maintaining ETL/ELT processes to automate data extraction, transformation, and loading from diverse sources.
- Data Modeling & Architecture: Apply expertise in data modeling techniques to design efficient and scalable database structures, ensuring data integrity and optimal performance.
- ETL/ELT Expertise: Implement and manage ETL processes and tools to ensure efficient and reliable data flow, maintaining high data quality and accessibility.
- Gen AI Integration: Explore and implement solutions leveraging LLM Mesh for Generative AI applications, contributing to the development of innovative AI-powered features.
- Programming & Scripting: Utilize programming languages such as Python and SQL for data manipulation, analysis, automation, and the development of custom data solutions.
- Cloud Platform Deployment: Deploy and manage scalable data solutions on cloud platforms such as AWS or Azure, leveraging their respective services for optimal performance and cost-efficiency.
- Data Quality & Governance: Ensure seamless integration of data sources, maintaining high data quality, consistency, and accessibility across all data assets. Implement data governance best practices.
- Collaboration & Mentorship: Collaborate closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver impactful solutions. Potentially mentor junior team members.
- Performance Optimization: Continuously monitor and optimize the performance of data pipelines and data systems.

Required Skills & Experience:
- Proficiency in Dataiku: Demonstrable expertise in Dataiku for data preparation, analysis, visualization, and building end-to-end data pipelines and applications.
- Expertise in Data Modeling: Strong understanding and practical experience in various data modeling techniques (e.g., dimensional modeling, Kimball, Inmon) to design efficient and scalable database structures.
- ETL/ELT Processes & Tools: Extensive experience with ETL/ELT processes and a proven track record of using various ETL tools (e.g., Dataiku's built-in capabilities, Apache Airflow, Talend, SSIS, etc.).
- Familiarity with LLM Mesh: Familiarity with LLM Mesh or similar frameworks for Gen AI applications, understanding its concepts and potential for integration.
- Programming Languages: Strong proficiency in Python for data manipulation, scripting, and developing data solutions. Solid command of SQL for complex querying, data analysis, and database interactions.
- Cloud Platforms: Knowledge and hands-on experience with at least one major cloud platform (AWS or Azure) for deploying and managing scalable data solutions (e.g., S3, EC2, Azure Data Lake, Azure Synapse, etc.).
- Gen AI Concepts: Basic understanding of Generative AI concepts and their potential applications in data engineering.
- Problem-Solving: Excellent analytical and problem-solving skills with a keen eye for detail.
- Communication: Strong communication and interpersonal skills to collaborate effectively with cross-functional teams.

Bonus Points (Nice to Have):
- Experience with other big data technologies (e.g., Spark, Hadoop, Snowflake).
- Familiarity with data governance and data security best practices.
- Experience with MLOps principles and tools.
- Contributions to open-source projects related to data engineering or AI.

Education: Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related quantitative field.
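As a flavor of the Dataiku-centric work described above, here is a minimal sketch of a Python recipe using Dataiku's Python API; the dataset names are hypothetical, and the snippet assumes it runs inside a Dataiku project where those datasets exist.

```python
import dataiku
import pandas as pd

# Read the input dataset (name is a placeholder) into a pandas DataFrame.
orders = dataiku.Dataset("orders").get_dataframe()

# Derive a simple feature for downstream reporting.
orders["order_month"] = (
    pd.to_datetime(orders["order_date"]).dt.to_period("M").astype(str)
)

# Write the enriched DataFrame to the recipe's output dataset.
out = dataiku.Dataset("orders_enriched")
out.write_with_schema(orders)
```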
Posted 2 weeks ago
10.0 - 17.0 years
15 - 25 Lacs
Hyderabad, Bengaluru
Work from Office
Detailed job description - Skill Set: Data Scientist / Senior Data Scientist (8+ years of experience)

Job Title: Data Scientist / Senior Data Scientist

Job Summary: As a Data Scientist, you will design and implement machine learning models and advanced analytics solutions to solve complex business problems. You will work with large-scale data, build predictive models, and collaborate with engineering and product teams to deploy data-driven solutions. You will also lead initiatives in computer vision and deploy models using AWS services.

Key Responsibilities:
- Develop and deploy machine learning models for classification, regression, clustering, and recommendation systems.
- Design and implement computer vision solutions using CNNs, object detection, and image segmentation techniques.
- Deploy machine learning models using AWS services such as SageMaker, Lambda, S3, and API Gateway.
- Perform feature engineering, model selection, and hyperparameter tuning.
- Analyze experimental results and iterate on model improvements.
- Collaborate with data engineers to ensure data pipelines are robust and scalable.
- Communicate technical findings to non-technical stakeholders.
- Stay updated with the latest research and trends in data science and AI.

Required Skills & Qualifications:
- Master's or PhD in Computer Science, Statistics, Mathematics, or a related field.
- Strong programming skills in Python (pandas, scikit-learn, TensorFlow, PyTorch).
- Solid understanding of machine learning algorithms and statistical modeling.
- Hands-on experience with computer vision libraries such as OpenCV, TensorFlow, or PyTorch.
- Experience deploying models on AWS using services like SageMaker, Lambda, and S3.
- Familiarity with big data tools (e.g., Spark, Hadoop) and cloud platforms.
- Proficiency in SQL and data wrangling.
- Strong communication and collaboration skills.

Mandatory Skills: Data Scientist with strong hands-on skills in AI, ML models, computer vision solutions, AWS services, Python, and data pipelines.
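Given the emphasis on SageMaker deployment, here is a minimal sketch of invoking an already-deployed SageMaker endpoint with boto3; the endpoint name, region, and payload shape are hypothetical, not details from the posting.

```python
import json

import boto3

runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")

# Hypothetical feature vector for a deployed classifier.
payload = {"features": [[5.1, 3.5, 1.4, 0.2]]}

response = runtime.invoke_endpoint(
    EndpointName="churn-classifier",   # placeholder endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)
prediction = json.loads(response["Body"].read())
print(prediction)
```

The same endpoint would typically sit behind Lambda and API Gateway for client-facing access, matching the services listed above.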
Posted 2 weeks ago
8.0 - 10.0 years
9 - 13 Lacs
Noida
Work from Office
Role Responsibilities:
- Design and implement data pipelines using MS Fabric.
- Develop data models to support business intelligence and analytics.
- Manage and optimize ETL processes for data extraction, transformation, and loading.
- Collaborate with cross-functional teams to gather and define data requirements.
- Ensure data quality and integrity in all data processes.
- Implement best practices for data management, storage, and processing.
- Conduct performance tuning for data storage and retrieval for enhanced efficiency.
- Generate and maintain documentation for data architecture and data flow.
- Participate in troubleshooting data-related issues and implement solutions.
- Monitor and optimize cloud-based solutions for scalability and resource efficiency.
- Evaluate emerging technologies and tools for potential incorporation in projects.
- Assist in designing data governance frameworks and policies.
- Provide technical guidance and support to junior data engineers.
- Participate in code reviews and ensure adherence to coding standards.
- Stay updated with industry trends and best practices in data engineering.

Qualifications:
- 8+ years of experience in data engineering roles.
- Strong expertise in MS Fabric and related technologies.
- Proficiency in SQL and relational database management systems.
- Experience with data warehousing solutions and data modeling.
- Hands-on experience in ETL tools and processes.
- Knowledge of cloud computing platforms (Azure, AWS, GCP).
- Familiarity with Python or similar programming languages.
- Ability to communicate complex concepts clearly to non-technical stakeholders.
- Experience in implementing data quality measures and data governance.
- Strong problem-solving skills and attention to detail.
- Ability to work independently in a remote environment.
- Experience with data visualization tools is a plus.
- Excellent analytical and organizational skills.
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Experience in Agile methodologies and project management.
Posted 2 weeks ago
7.0 - 10.0 years
10 - 14 Lacs
Noida
Work from Office
About the Job:

We are seeking a highly skilled and experienced Senior Data Engineer to join our dynamic team. In this pivotal role, you will be instrumental in driving our data engineering initiatives, with a strong emphasis on leveraging Dataiku's capabilities to enhance data processing and analytics. You will be responsible for designing, developing, and optimizing robust data pipelines, ensuring seamless integration of diverse data sources, and maintaining high data quality and accessibility to support our business intelligence and advanced analytics projects. This role requires a unique blend of expertise in traditional data engineering principles, advanced data modeling, and a forward-thinking approach to integrating cutting-edge AI technologies, particularly LLM Mesh for Generative AI applications. If you are passionate about building scalable data solutions and are eager to explore the cutting edge of AI, we encourage you to apply.

Key Responsibilities:
- Dataiku Leadership: Drive data engineering initiatives with a strong emphasis on leveraging Dataiku capabilities for data preparation, analysis, visualization, and the deployment of data solutions.
- Data Pipeline Development: Design, develop, and optimize robust and scalable data pipelines to support various business intelligence and advanced analytics projects. This includes developing and maintaining ETL/ELT processes to automate data extraction, transformation, and loading from diverse sources.
- Data Modeling & Architecture: Apply expertise in data modeling techniques to design efficient and scalable database structures, ensuring data integrity and optimal performance.
- ETL/ELT Expertise: Implement and manage ETL processes and tools to ensure efficient and reliable data flow, maintaining high data quality and accessibility.
- Gen AI Integration: Explore and implement solutions leveraging LLM Mesh for Generative AI applications, contributing to the development of innovative AI-powered features.
- Programming & Scripting: Utilize programming languages such as Python and SQL for data manipulation, analysis, automation, and the development of custom data solutions.
- Cloud Platform Deployment: Deploy and manage scalable data solutions on cloud platforms such as AWS or Azure, leveraging their respective services for optimal performance and cost-efficiency.
- Data Quality & Governance: Ensure seamless integration of data sources, maintaining high data quality, consistency, and accessibility across all data assets. Implement data governance best practices.
- Collaboration & Mentorship: Collaborate closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver impactful solutions. Potentially mentor junior team members.
- Performance Optimization: Continuously monitor and optimize the performance of data pipelines and data systems.

Required Skills & Experience:
- Proficiency in Dataiku: Demonstrable expertise in Dataiku for data preparation, analysis, visualization, and building end-to-end data pipelines and applications.
- Expertise in Data Modeling: Strong understanding and practical experience in various data modeling techniques (e.g., dimensional modeling, Kimball, Inmon) to design efficient and scalable database structures.
- ETL/ELT Processes & Tools: Extensive experience with ETL/ELT processes and a proven track record of using various ETL tools (e.g., Dataiku's built-in capabilities, Apache Airflow, Talend, SSIS, etc.).
- Familiarity with LLM Mesh: Familiarity with LLM Mesh or similar frameworks for Gen AI applications, understanding its concepts and potential for integration.
- Programming Languages: Strong proficiency in Python for data manipulation, scripting, and developing data solutions. Solid command of SQL for complex querying, data analysis, and database interactions.
- Cloud Platforms: Knowledge and hands-on experience with at least one major cloud platform (AWS or Azure) for deploying and managing scalable data solutions (e.g., S3, EC2, Azure Data Lake, Azure Synapse, etc.).
- Gen AI Concepts: Basic understanding of Generative AI concepts and their potential applications in data engineering.
- Problem-Solving: Excellent analytical and problem-solving skills with a keen eye for detail.
- Communication: Strong communication and interpersonal skills to collaborate effectively with cross-functional teams.

Bonus Points (Nice to Have):
- Experience with other big data technologies (e.g., Spark, Hadoop, Snowflake).
- Familiarity with data governance and data security best practices.
- Experience with MLOps principles and tools.
- Contributions to open-source projects related to data engineering or AI.

Education: Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related quantitative field.
Posted 2 weeks ago
3.0 - 6.0 years
9 - 13 Lacs
Noida
Work from Office
About the job:

As a Mid Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges.

What You'll Do:
- Design and develop data processing pipelines and analytics solutions using Databricks.
- Architect scalable and efficient data models and storage solutions on the Databricks platform.
- Collaborate with architects and other teams to migrate the current solution to Databricks.
- Optimize performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.
- Use best practices for data governance, security, and compliance on the Databricks platform.
- Mentor junior engineers and provide technical guidance.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.

You'll Be Expected To Have:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 3 to 6 years of overall experience and 2+ years of experience designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP, and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills with the ability to work effectively in cross-functional teams.
- Good to have: experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.
Posted 2 weeks ago
4.0 - 9.0 years
18 - 32 Lacs
Noida, Kolkata, Pune
Work from Office
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose – the relentless pursuit of a world that works better for people – we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of AI/ML Engineer! In this role, we are looking for candidates who have relevant experience in designing and developing machine learning and deep learning systems, who have professional software development experience, and who are hands-on in running machine learning tests and experiments and implementing appropriate ML algorithms.

Responsibilities
• Drive the vision for a modern data and analytics platform to deliver well-architected and engineered data and analytics products leveraging a cloud tech stack and third-party products
• Close the gap between ML research and production to create ground-breaking new products and features and solve problems for our customers
• Design, develop, test, and deploy data pipelines, machine learning infrastructure, and client-facing products and services
• Build and implement machine learning models and prototype solutions for proof-of-concept
• Scale existing ML models into production on a variety of cloud platforms
• Analyze and resolve architectural problems, working closely with engineering, data science, and operations teams

Qualifications we seek in you!

Minimum Qualifications / Skills
• Bachelor's degree in computer science engineering, information technology, or BSc in Computer Science, Mathematics, or a similar field
• Master's degree is a plus
• Integration – APIs, microservices, and ETL/ELT patterns
• DevOps (good to have) – Ansible, Jenkins, ELK
• Containerization – Docker, Kubernetes, etc.
• Orchestration – Airflow, Step Functions, Control-M, etc.
• Languages and scripting: Python, Scala, Java, etc.
• Cloud services – AWS, GCP, Azure, and cloud-native
• Analytics and ML tooling – SageMaker, ML Studio
• Execution paradigm – low latency/streaming, batch

Preferred Qualifications / Skills
• Data platforms – Big Data (Hadoop, Spark, Hive, Kafka, etc.) and Data Warehouse (Teradata, Redshift, BigQuery, Snowflake, etc.)
• Visualization tools – Power BI, Tableau

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 2 weeks ago
1.0 - 7.0 years
3 - 9 Lacs
Pune
Work from Office
Required Skills and Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Hands-on experience in data pipeline testing, preferably in a cloud environment.
- Strong experience with Google Cloud Platform services, especially BigQuery.
- Proficient in working with Kafka, Hive, Parquet files, and Snowflake.
- Expertise in data quality testing and metrics calculations for both batch and streaming data.
- Excellent programming skills in Python and experience with test automation.
- Strong analytical and problem-solving abilities.
- Excellent communication and teamwork skills.
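To make the data-pipeline-testing requirement concrete, here is a minimal sketch of a BigQuery row-count reconciliation check using the google-cloud-bigquery client; the project and table names are placeholders, not details from the posting.

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project

def row_count(table: str) -> int:
    # Count rows in a fully-qualified BigQuery table.
    query = f"SELECT COUNT(*) AS n FROM `{table}`"
    return list(client.query(query).result())[0]["n"]

# Hypothetical source/target pair for a pipeline-level data-quality test.
source = row_count("my-project.staging.events")
target = row_count("my-project.warehouse.events")
assert source == target, f"row count mismatch: {source} vs {target}"
```

In practice such checks are wrapped in a test framework like pytest and run after each pipeline load.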
Posted 2 weeks ago
7.0 - 12.0 years
25 - 30 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Develop and maintain data pipelines, ETL/ELT processes, and workflows to ensure the seamless integration and transformation of data. Architect, implement, and optimize scalable data solutions.

Required Candidate Profile: Work closely with data scientists, analysts, and business stakeholders to understand requirements and deliver actionable insights. Partner with cloud architects and DevOps teams.
Posted 2 weeks ago
6.0 - 10.0 years
30 - 35 Lacs
Bengaluru
Work from Office
Role Description:

As a Data Engineering Lead, you will play a crucial role in overseeing the design, development, and maintenance of our organization's data architecture and infrastructure. You will be responsible for designing and developing the architecture for the data platform, ensuring the efficient and effective processing of large volumes of data and enabling the business to make informed decisions based on reliable and high-quality data. The ideal candidate will have a strong background in data engineering, excellent leadership skills, and a proven track record of successfully managing complex data projects.

Responsibilities:
- Data Architecture and Design: Design and implement scalable and efficient data architectures to support the organization's data processing needs. Work closely with cross-functional teams to understand data requirements and ensure that data solutions align with business objectives.
- ETL Development: Oversee the development of robust ETL processes to extract, transform, and load data from various sources into the data warehouse. Ensure data quality and integrity throughout the ETL process, implementing best practices for data cleansing and validation.
- Big Data Technology: Stay abreast of emerging trends and technologies in big data and analytics, and assess their applicability to the organization's data strategy. Implement and optimize big data technologies to process and analyze large datasets efficiently.
- Cloud Integration: Collaborate with the IT infrastructure team to integrate data engineering solutions with cloud platforms, ensuring scalability, security, and performance.
- Performance Monitoring and Optimization: Implement monitoring tools and processes to track the performance of data pipelines, proactively address any issues, and optimize data processing.
- Documentation: Maintain comprehensive documentation for data engineering processes, data models, and system architecture. Ensure that team members follow documentation standards and best practices.
- Collaboration and Communication: Collaborate with data scientists, analysts, and other stakeholders to understand their data needs and deliver solutions that meet those requirements. Communicate effectively with technical and non-technical stakeholders, providing updates on project status, challenges, and opportunities.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 6-8 years of professional experience in data engineering.
- In-depth knowledge of data modeling, ETL processes, and data warehousing.
- In-depth knowledge of building data warehouses using Snowflake.
- Experience in data ingestion, data lakes, data mesh, and data governance.
- Experience in Python programming (required).
- Strong understanding of big data technologies and frameworks, such as Hadoop, Spark, and Kafka.
- Experience with cloud platforms, such as AWS, Azure, or Google Cloud.
- Familiarity with database systems like SQL, NoSQL, and data pipeline orchestration tools.
- Excellent problem-solving and analytical skills.
- Strong communication and interpersonal skills.
- Proven ability to work collaboratively in a fast-paced, dynamic environment.
Posted 2 weeks ago
5.0 - 10.0 years
5 - 9 Lacs
Ahmedabad, Remote
Work from Office
Key Responsibilities:
- Design and develop scalable PySpark pipelines to ingest, parse, and process XML datasets with extreme hierarchical complexity.
- Implement efficient XPath expressions, recursive parsing techniques, and custom schema definitions to extract data from nested XML structures.
- Optimize Spark jobs through partitioning, caching, and parallel processing to handle terabytes of XML data efficiently.
- Transform raw hierarchical XML data into structured DataFrames for analytics, machine learning, and reporting use cases.
- Collaborate with data architects and analysts to define data models for nested XML schemas.
- Troubleshoot performance bottlenecks and ensure reliability in distributed environments (e.g., AWS, Databricks, Hadoop).
- Document parsing logic, data lineage, and optimization strategies for maintainability.

Qualifications:
- 5+ years of hands-on experience with PySpark and Spark XML libraries (e.g., `spark-xml`) in production environments.
- Proven track record of parsing XML data with 20+ levels of nesting using recursive methods and schema inference.
- Expertise in XPath, XQuery, and DataFrame transformations (e.g., `explode`, `struct`, `selectExpr`) for hierarchical data.
- Strong understanding of Spark optimization techniques: partitioning strategies, broadcast variables, and memory management.
- Experience with distributed computing frameworks (e.g., Hadoop, YARN) and cloud platforms (AWS, Azure, GCP).
- Familiarity with big data file formats (Parquet, Avro) and orchestration tools (Airflow, Luigi).
- Bachelor's degree in Computer Science, Data Engineering, or a related field.

Preferred Skills:
- Experience with schema evolution and versioning for nested XML/JSON datasets.
- Knowledge of Scala or Java for extending Spark XML libraries.
- Exposure to Databricks, Delta Lake, or similar platforms.
- Certifications in AWS/Azure big data technologies.
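As an illustration of the core parsing task above, here is a minimal PySpark sketch that reads nested XML with the `spark-xml` package and flattens one level with `explode`; the rowTag and field names are assumptions, and a real 20+ level schema would apply this pattern recursively.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, explode

# Assumes the spark-xml package (com.databricks:spark-xml) is on the cluster.
spark = SparkSession.builder.appName("xml-flatten").getOrCreate()

orders = (
    spark.read.format("xml")
    .option("rowTag", "order")       # each <order> element becomes one row
    .load("/data/raw/orders.xml")    # placeholder path
)

# Suppose each <order> holds a nested <items><item>... structure;
# explode yields one row per item. "_id" reflects spark-xml's default
# attribute prefix and is illustrative.
items = orders.select(
    col("_id").alias("order_id"),
    explode(col("items.item")).alias("item"),
).select("order_id", "item.sku", "item.qty")

items.show()
```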
Posted 2 weeks ago
4.0 - 9.0 years
8 - 13 Lacs
Pune / Multiple Locations
Work from Office
Role: Senior Databricks Engineer

As a Senior Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges.

What you'll do:
- Design and develop data processing pipelines and analytics solutions using Databricks.
- Architect scalable and efficient data models and storage solutions on the Databricks platform.
- Collaborate with architects and other teams to migrate the current solution to Databricks.
- Optimize performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.
- Use best practices for data governance, security, and compliance on the Databricks platform.
- Mentor junior engineers and provide technical guidance.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.

You'll be expected to have:
- Bachelor's or master's degree in Computer Science, Engineering, or a related field.
- 5 to 8 years of overall experience and 2+ years of experience designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP, and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills with the ability to work effectively in cross-functional teams.
- Good to have: experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.
Posted 2 weeks ago
6.0 - 9.0 years
8 - 11 Lacs
Pune
Work from Office
About the job:

Experience: 6+ years as an Azure Data Engineer, including at least one end-to-end implementation in Microsoft Fabric.

Responsibilities:
- Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses.
- Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions.
- Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment.
- Collaborate with stakeholders to translate business needs into actionable data solutions.
- Troubleshoot and optimize existing Fabric implementations for enhanced performance.

Skills:
- Solid foundational knowledge of data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized).
- Design and implement scalable and efficient data pipelines using Data Factory in Fabric (Data Pipeline, Dataflow Gen2, etc.), PySpark notebooks, Spark SQL, and Python. This includes data ingestion, data transformation, and data loading processes.
- Experience ingesting data from SAP systems (SAP ECC, S/4HANA, SAP BW, etc.) is a plus.
- Nice to have: the ability to develop dashboards or reports using tools like Power BI.

Coding Fluency:
- Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.
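To illustrate the Fabric notebook skills listed above, here is a minimal sketch of a PySpark cell that lands CSV files into a Lakehouse table and curates it with Spark SQL; the paths and table names are assumptions, and `spark` is the session that Fabric notebooks predefine.

```python
# Runs inside a Microsoft Fabric notebook, where `spark` is predefined.
# Ingest: load raw CSV files from the Lakehouse Files area (placeholder path).
df = spark.read.option("header", True).csv("Files/landing/sales/*.csv")
df.write.mode("overwrite").saveAsTable("lakehouse_sales_raw")

# Transform with Spark SQL: cast types and drop invalid rows.
curated = spark.sql("""
    SELECT region, CAST(amount AS DOUBLE) AS amount, order_date
    FROM lakehouse_sales_raw
    WHERE amount IS NOT NULL
""")
curated.write.mode("overwrite").saveAsTable("lakehouse_sales_curated")
```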
Posted 2 weeks ago
3.0 - 6.0 years
9 - 13 Lacs
Ahmedabad
Work from Office
About the job:

As a Mid Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges.

What You'll Do:
- Design and develop data processing pipelines and analytics solutions using Databricks.
- Architect scalable and efficient data models and storage solutions on the Databricks platform.
- Collaborate with architects and other teams to migrate the current solution to Databricks.
- Optimize performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.
- Use best practices for data governance, security, and compliance on the Databricks platform.
- Mentor junior engineers and provide technical guidance.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.

You'll Be Expected To Have:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 3 to 6 years of overall experience and 2+ years of experience designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP, and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills with the ability to work effectively in cross-functional teams.
- Good to have: experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.
Posted 2 weeks ago
7.0 - 10.0 years
9 - 12 Lacs
Pune
Work from Office
About the Job:

We are seeking a highly skilled and experienced Senior Data Engineer to join our dynamic team. In this pivotal role, you will be instrumental in driving our data engineering initiatives, with a strong emphasis on leveraging Dataiku's capabilities to enhance data processing and analytics. You will be responsible for designing, developing, and optimizing robust data pipelines, ensuring seamless integration of diverse data sources, and maintaining high data quality and accessibility to support our business intelligence and advanced analytics projects. This role requires a unique blend of expertise in traditional data engineering principles, advanced data modeling, and a forward-thinking approach to integrating cutting-edge AI technologies, particularly LLM Mesh for Generative AI applications. If you are passionate about building scalable data solutions and are eager to explore the cutting edge of AI, we encourage you to apply.

Key Responsibilities:
- Dataiku Leadership: Drive data engineering initiatives with a strong emphasis on leveraging Dataiku capabilities for data preparation, analysis, visualization, and the deployment of data solutions.
- Data Pipeline Development: Design, develop, and optimize robust and scalable data pipelines to support various business intelligence and advanced analytics projects. This includes developing and maintaining ETL/ELT processes to automate data extraction, transformation, and loading from diverse sources.
- Data Modeling & Architecture: Apply expertise in data modeling techniques to design efficient and scalable database structures, ensuring data integrity and optimal performance.
- ETL/ELT Expertise: Implement and manage ETL processes and tools to ensure efficient and reliable data flow, maintaining high data quality and accessibility.
- Gen AI Integration: Explore and implement solutions leveraging LLM Mesh for Generative AI applications, contributing to the development of innovative AI-powered features.
- Programming & Scripting: Utilize programming languages such as Python and SQL for data manipulation, analysis, automation, and the development of custom data solutions.
- Cloud Platform Deployment: Deploy and manage scalable data solutions on cloud platforms such as AWS or Azure, leveraging their respective services for optimal performance and cost-efficiency.
- Data Quality & Governance: Ensure seamless integration of data sources, maintaining high data quality, consistency, and accessibility across all data assets. Implement data governance best practices.
- Collaboration & Mentorship: Collaborate closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver impactful solutions. Potentially mentor junior team members.
- Performance Optimization: Continuously monitor and optimize the performance of data pipelines and data systems.

Required Skills & Experience:
- Proficiency in Dataiku: Demonstrable expertise in Dataiku for data preparation, analysis, visualization, and building end-to-end data pipelines and applications.
- Expertise in Data Modeling: Strong understanding and practical experience in various data modeling techniques (e.g., dimensional modeling, Kimball, Inmon) to design efficient and scalable database structures.
- ETL/ELT Processes & Tools: Extensive experience with ETL/ELT processes and a proven track record of using various ETL tools (e.g., Dataiku's built-in capabilities, Apache Airflow, Talend, SSIS, etc.).
- Familiarity with LLM Mesh: Familiarity with LLM Mesh or similar frameworks for Gen AI applications, understanding its concepts and potential for integration.
- Programming Languages: Strong proficiency in Python for data manipulation, scripting, and developing data solutions. Solid command of SQL for complex querying, data analysis, and database interactions.
- Cloud Platforms: Knowledge and hands-on experience with at least one major cloud platform (AWS or Azure) for deploying and managing scalable data solutions (e.g., S3, EC2, Azure Data Lake, Azure Synapse, etc.).
- Gen AI Concepts: Basic understanding of Generative AI concepts and their potential applications in data engineering.
- Problem-Solving: Excellent analytical and problem-solving skills with a keen eye for detail.
- Communication: Strong communication and interpersonal skills to collaborate effectively with cross-functional teams.

Bonus Points (Nice to Have):
- Experience with other big data technologies (e.g., Spark, Hadoop, Snowflake).
- Familiarity with data governance and data security best practices.
- Experience with MLOps principles and tools.
- Contributions to open-source projects related to data engineering or AI.

Education: Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related quantitative field.
Posted 2 weeks ago
4.0 - 9.0 years
6 - 11 Lacs
Ahmedabad
Work from Office
Role: Senior Databricks Engineer

As a Senior Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges.

What you'll do:
- Design and develop data processing pipelines and analytics solutions using Databricks.
- Architect scalable and efficient data models and storage solutions on the Databricks platform.
- Collaborate with architects and other teams to migrate the current solution to Databricks.
- Optimize performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.
- Use best practices for data governance, security, and compliance on the Databricks platform.
- Mentor junior engineers and provide technical guidance.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.

You'll be expected to have:
- Bachelor's or master's degree in Computer Science, Engineering, or a related field.
- 5 to 8 years of overall experience and 2+ years of experience designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP, and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills with the ability to work effectively in cross-functional teams.
- Good to have: experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.
Posted 2 weeks ago
3.0 - 6.0 years
9 - 13 Lacs
Pune
Work from Office
About the job:

As a Mid Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges.

What You'll Do:
- Design and develop data processing pipelines and analytics solutions using Databricks.
- Architect scalable and efficient data models and storage solutions on the Databricks platform.
- Collaborate with architects and other teams to migrate the current solution to Databricks.
- Optimize performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.
- Use best practices for data governance, security, and compliance on the Databricks platform.
- Mentor junior engineers and provide technical guidance.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.

You'll Be Expected To Have:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 3 to 6 years of overall experience and 2+ years of experience designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP, and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills with the ability to work effectively in cross-functional teams.
- Good to have: experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.
Posted 2 weeks ago
6.0 - 9.0 years
9 - 13 Lacs
Ahmedabad
Work from Office
About the job:

Role: Microsoft Fabric Data Engineer

Experience: 6+ years as an Azure Data Engineer, including at least one end-to-end implementation in Microsoft Fabric.

Responsibilities:
- Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses.
- Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions.
- Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment.
- Collaborate with stakeholders to translate business needs into actionable data solutions.
- Troubleshoot and optimize existing Fabric implementations for enhanced performance.

Skills:
- Solid foundational knowledge of data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized).
- Design and implement scalable and efficient data pipelines using Data Factory in Fabric (Data Pipeline, Dataflow Gen2, etc.), PySpark notebooks, Spark SQL, and Python. This includes data ingestion, data transformation, and data loading processes.
- Experience ingesting data from SAP systems (SAP ECC, S/4HANA, SAP BW, etc.) is a plus.
- Nice to have: the ability to develop dashboards or reports using tools like Power BI.

Coding Fluency:
- Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.
Posted 2 weeks ago