3.0 - 7.0 years
0 Lacs
Hyderabad, Telangana
On-site
NTT DATA, a global innovator in business and technology services, is seeking a Snowflake Engineer - Digital Solution Consultant Sr. Analyst to join its team in Hyderabad, Telangana (IN-TG), India (IN). The ideal candidate will have experience with cloud data warehousing solutions, knowledge of big data technologies such as Apache Spark and Hadoop, and familiarity with CI/CD pipelines, DevOps practices, and data visualization tools. NTT DATA, a trusted global innovator with $30 billion in revenue, serves 75% of the Fortune Global 100. Committed to helping clients innovate, optimize, and transform for long-term success, NTT DATA has a diverse team of experts in more than 50 countries and a robust partner ecosystem. Its services range from business and technology consulting to data and artificial intelligence, industry solutions, and the development, implementation, and management of applications, infrastructure, and connectivity. A leading provider of digital and AI infrastructure worldwide, NTT DATA is part of the NTT Group, which invests over $3.6 billion annually in R&D to help organizations and society move confidently into the digital future. To learn more, visit us.nttdata.com.
Posted 21 hours ago
9.0 - 13.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
We are seeking a Lead Python Developer / Tech Lead to take charge of backend development and oversee a team building enterprise-grade, data-driven applications. In this role, you will work with cutting-edge technologies such as FastAPI, Apache Spark, and Lakehouse architectures. Your responsibilities will include leading the team, making technical decisions, and ensuring timely project delivery in a dynamic work environment. Your primary duties will involve mentoring and guiding a group of Python developers, managing task assignments, maintaining code quality, and overseeing technical delivery. You will be responsible for designing and implementing scalable RESTful APIs using Python and FastAPI, as well as managing large-scale data processing tasks using Pandas, NumPy, and Apache Spark. Additionally, you will drive the implementation of Lakehouse architectures and data pipelines, conduct code reviews, enforce coding best practices, and promote clean, testable code. Collaboration with cross-functional teams, including DevOps and Data Engineering, will be essential. Furthermore, you will be expected to contribute to CI/CD processes, operate in Linux-based environments, and potentially work with Kubernetes or MLOps tools. To excel in this role, you should have 9-12 years of total experience in software development, with a strong command of Python, FastAPI, and contemporary backend frameworks. A deep understanding of data engineering workflows, Spark, and distributed systems is crucial. Experience leading agile teams or serving in a tech lead role is beneficial. Proficiency in unit testing, Linux, and working in cloud/data environments is required, while exposure to Kubernetes, ML pipelines, or MLOps would be advantageous.
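For illustration, a minimal sketch of the kind of FastAPI endpoint work this role describes. All names here (the app, the `/metrics/summary` route, `MetricQuery`, the sample frame) are hypothetical stand-ins, not details from the posting.

```python
# A minimal FastAPI data endpoint sketch; names and data are hypothetical.
from fastapi import FastAPI
from pydantic import BaseModel
import pandas as pd

app = FastAPI()

class MetricQuery(BaseModel):
    column: str

# A small in-memory frame standing in for a real data source.
_df = pd.DataFrame({"latency_ms": [12, 40, 33], "errors": [0, 1, 0]})

@app.post("/metrics/summary")
def summarize(query: MetricQuery) -> dict:
    # Return basic descriptive statistics for the requested column.
    series = _df[query.column]
    return {"mean": float(series.mean()), "max": float(series.max())}
```

In practice the in-memory frame would be replaced by a Spark or Lakehouse query, but the request/response shape stays the same.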
Posted 23 hours ago
3.0 - 7.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
As a Research Scientist at Adobe, you will have the opportunity to engage in cutting-edge research within the Media and Data Science Research Laboratory. Your role will involve designing, implementing, and optimizing machine learning algorithms to address real-world problems related to understanding user behavior and enhancing marketing performance. You will also be responsible for developing scalable data generation techniques, analyzing complex data from various sources, and running experiments to study data quality. Collaboration will be a key aspect of your role, as you will work closely with product, design, and engineering teams to prototype and transition research concepts into production. Your expertise in areas such as Large Language Models, Computer Vision, Natural Language Processing, and Recommendation Systems will be crucial in driving innovative solutions that redefine how businesses operate. To thrive in this role, you should have a proven track record of empirical research, experience deploying solutions in production environments, and the ability to derive actionable insights from large datasets. A degree in Computer Science, Statistics, Economics, or a related field is required, along with a knack for taking research risks and solving complex problems independently. At Adobe, we prioritize diversity, respect, and equal opportunity, recognizing that valuable insights can come from any team member. If you are a motivated and versatile individual with a passion for transforming digital experiences, we encourage you to join our ambitious team and contribute to the future of technology innovation.
Posted 23 hours ago
10.0 - 14.0 years
0 Lacs
Dehradun, Uttarakhand
On-site
As a Data Modeler, your primary responsibility will be to design and develop conceptual, logical, and physical data models supporting enterprise data initiatives. You will work with modern storage formats like Parquet and ORC, and build and optimize data models within Databricks Unity Catalog. Collaborating with data engineers, architects, analysts, and stakeholders, you will ensure alignment with ingestion pipelines and business goals. Translating business and reporting requirements into robust data architecture, you will follow best practices in data warehousing and Lakehouse design. Your role will involve maintaining metadata artifacts, enforcing data governance, quality, and security protocols, and continuously improving modeling processes. You should have over 10 years of hands-on experience in data modeling within Big Data environments. Your expertise should include OLTP, OLAP, dimensional modeling, and enterprise data warehouse practices. Proficiency in modeling methodologies like Kimball, Inmon, and Data Vault is essential. Hands-on experience with modeling tools such as ER/Studio, ERwin, PowerDesigner, SQLDBM, dbt, or Lucidchart is preferred. Experience in Databricks with Unity Catalog and Delta Lake is required, along with a strong command of SQL and Apache Spark for querying and transformation. Familiarity with the Azure Data Platform, including Azure Data Factory, Azure Data Lake Storage, Azure Synapse Analytics, and Azure SQL Database, is beneficial. Exposure to Azure Purview or similar data cataloging tools is a plus. Strong communication and documentation skills are necessary for this role, as well as the ability to work in cross-functional agile environments. A Bachelor's or Master's degree in Computer Science, Information Systems, Data Engineering, or a related field is required. Certifications such as Microsoft DP-203: Data Engineering on Microsoft Azure are a plus. Experience working in agile/scrum environments and exposure to enterprise data security and regulatory compliance frameworks like GDPR and HIPAA are advantageous.
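As a hedged illustration of the Kimball-style modeling on Delta Lake this role calls for, a minimal sketch follows. The catalog, schema, and table names are hypothetical, and it assumes a Databricks runtime where `spark` is provided and Unity Catalog is enabled.

```python
# Register a tiny dimensional model (one dimension, one fact) as Delta tables.
# All object names (main.sales.*) are invented for illustration.
spark.sql("""
    CREATE TABLE IF NOT EXISTS main.sales.dim_customer (
        customer_key BIGINT,
        customer_name STRING,
        region STRING,
        effective_from DATE,   -- SCD Type 2 validity window start
        effective_to DATE      -- SCD Type 2 validity window end
    ) USING DELTA
""")

spark.sql("""
    CREATE TABLE IF NOT EXISTS main.sales.fact_orders (
        order_id BIGINT,
        customer_key BIGINT,   -- surrogate key into dim_customer
        order_date DATE,
        amount DECIMAL(12, 2)
    ) USING DELTA
    PARTITIONED BY (order_date)
""")
```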
Posted 23 hours ago
4.0 - 8.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Big Data Architect specializing in Databricks at Codvo, a global empathy-led technology services company, your role is critical in designing sophisticated data solutions that drive business value for enterprise clients and power internal AI products. Your expertise will be instrumental in architecting scalable, high-performance data lakehouse platforms and end-to-end data pipelines, making you the go-to expert for modern data architecture in a cloud-first world. Your key responsibilities will include designing and documenting robust, end-to-end big data solutions on cloud platforms (AWS, Azure, GCP) with a focus on the Databricks Lakehouse Platform. You will provide technical guidance and oversight to data engineering teams on best practices for data ingestion, transformation, and processing using Spark. Additionally, you will design and implement effective data models and establish data governance policies for data quality, security, and compliance within the lakehouse. Evaluating and recommending appropriate data technologies, tools, and frameworks to meet project requirements and collaborating closely with various stakeholders to translate complex business requirements into tangible technical architecture will also be part of your role. Leading and building Proof of Concepts (PoCs) to validate architectural approaches and new technologies in the big data and AI space will be crucial. To excel in this role, you should have 10+ years of experience in data engineering, data warehousing, or software engineering, with at least 4+ years in a dedicated Data Architect role. Deep, hands-on expertise with Apache Spark and the Databricks platform is mandatory, including Delta Lake, Unity Catalog, and Structured Streaming. Proven experience architecting and deploying data solutions on major cloud providers, proficiency in Python or Scala, expert-level SQL skills, a strong understanding of modern AI concepts, and in-depth knowledge of data warehousing concepts and modern Lakehouse patterns are essential. This position is remote and based in India, with working hours from 2:30 PM to 11:30 PM. Join us at Codvo and be part of a team that values product innovation and mature software engineering, and that lives its core values of Respect, Fairness, Growth, Agility, and Inclusiveness every day, offering expertise, outside-the-box thinking, and measurable results.
Posted 1 day ago
15.0 - 19.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a Technical Lead / Data Architect, you will play a crucial role in our organization by leveraging your expertise in modern data architectures, cloud platforms, and analytics technologies. In this leadership position, you will be responsible for designing robust data solutions, guiding engineering teams, and ensuring successful project execution in collaboration with the project manager. Your key responsibilities will include architecting and designing end-to-end data solutions across multi-cloud environments such as AWS, Azure, and GCP. You will lead and mentor a team of data engineers, BI developers, and analysts to deliver on complex project deliverables. Additionally, you will define and enforce best practices in data engineering, data warehousing, and business intelligence. You will design scalable data pipelines using tools like Snowflake, dbt, Apache Spark, and Airflow, and act as a technical liaison with clients, providing strategic recommendations and maintaining strong relationships. To be successful in this role, you should have at least 15 years of experience in IT with a focus on data architecture, engineering, and cloud-based analytics. You must have expertise in multi-cloud environments and cloud-native technologies, along with deep knowledge of Snowflake, Data Warehousing, ETL/ELT pipelines, and BI platforms. Strong leadership and mentoring skills are essential, as well as excellent communication and interpersonal abilities to engage with both technical and non-technical stakeholders. In addition to the required qualifications, certifications in major cloud platforms and experience in enterprise data governance, security, and compliance are preferred. Familiarity with AI/ML pipeline integration would be a plus. We offer a collaborative work environment, opportunities to work with cutting-edge technologies and global clients, competitive salary and benefits, and continuous learning and professional development opportunities. Join us in driving innovation and excellence in data architecture and analytics.
Posted 1 day ago
9.0 - 13.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
You should have 9+ years of experience and be located in Chennai. You must possess in-depth knowledge of Python and have good experience creating APIs using FastAPI. It is essential to have exposure to data libraries such as Pandas (DataFrames) and NumPy, as well as knowledge of Apache open-source components. Experience with Apache Spark, Lakehouse architecture, and open table formats is required. You should also have knowledge of automated unit testing, preferably using PyTest, and exposure to distributed computing. Experience working in a Linux environment is necessary, and working knowledge of Kubernetes would be an added advantage. Basic exposure to ML and MLOps would also be advantageous.
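A minimal sketch of the PyTest-based unit testing mentioned here, exercising a FastAPI route through its test client; the app and route are hypothetical stand-ins, not details from the posting.

```python
# Run with `pytest`; requires fastapi and httpx installed.
from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()

@app.get("/health")
def health() -> dict:
    return {"status": "ok"}

client = TestClient(app)

def test_health_endpoint():
    # PyTest discovers this function by its test_ prefix.
    response = client.get("/health")
    assert response.status_code == 200
    assert response.json() == {"status": "ok"}
```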
Posted 1 day ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
As a Data Engineer at Lifesight, you will play a crucial role in the Data and Business Intelligence organization by focusing on deep data engineering projects. Joining the data platform team in Bengaluru, you will have the opportunity to contribute to defining the technical strategy and data engineering team culture in India. Your responsibilities will include designing and constructing data platforms and services, as well as managing data infrastructure in cloud environments to support strategic business decisions across Lifesight products. You will be expected to build highly scalable distributed data processing systems, data solutions, and data pipelines that optimize data quality and are resilient to poor-quality data sources. Additionally, you will own data mapping, business logic, transformations, and data quality, while participating in architecture discussions, influencing the product roadmap, and taking ownership of new projects. The ideal candidate for this role should possess proficiency in Python and PySpark, a deep understanding of Apache Spark, experience with big data technologies such as HDFS, YARN, MapReduce, Hive, Kafka, Spark, Airflow, and Presto, and familiarity with distributed database systems. Experience working with file formats such as Parquet and Avro, with NoSQL databases, and with AWS and GCP is preferred. A minimum of 5 years of professional experience as a data or software engineer is required for this full-time position. If you are a self-starter who is passionate about data engineering, ready to work with big data technologies, and eager to collaborate with a team of engineers while mentoring others, we encourage you to apply for this exciting opportunity at Lifesight.
Posted 1 day ago
6.0 - 10.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Senior Data Engineer at our Pune location, you will play a critical role in designing, developing, and maintaining scalable data pipelines and architectures using Databricks on Azure/AWS cloud platforms. With 6 to 9 years of experience in the field, you will collaborate with stakeholders to integrate large datasets, optimize performance, implement ETL/ELT processes, ensure data governance, and work closely with cross-functional teams to deliver accurate solutions. Your responsibilities will include building, maintaining, and optimizing data workflows, integrating datasets from various sources, tuning pipelines for performance and scalability, implementing ETL/ELT processes using Spark and Databricks, ensuring data governance, collaborating with different teams, documenting data pipelines, and developing automated processes for continuous integration and deployment of data solutions. To excel in this role, you should have 6 to 9 years of hands-on experience as a Data Engineer, expertise in Apache Spark, Delta Lake, and Azure/AWS Databricks, proficiency in Python, Scala, or Java, advanced SQL skills, and experience with cloud data platforms, data warehousing solutions, data modeling, ETL tools, version control systems, and automation tools. Additionally, soft skills such as problem-solving, attention to detail, and the ability to work in a fast-paced environment are essential. Nice-to-have skills include experience with Databricks SQL and Databricks Delta, knowledge of machine learning concepts, and experience with CI/CD pipelines for data engineering solutions. Joining our team offers challenging work with international clients, growth opportunities, a collaborative culture, and global project involvement. We provide competitive salaries, flexible work schedules, health insurance, performance-based bonuses, and other standard benefits. If you are passionate about data engineering, possess the required skills and qualifications, and thrive in a dynamic and innovative environment, we welcome you to apply for this exciting opportunity.
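For context, a minimal PySpark sketch of the extract-transform-load pattern this posting describes. The paths and column names are hypothetical, and it assumes a Spark session with Delta Lake available (for example, a Databricks cluster).

```python
# Minimal ETL sketch: read raw CSV, clean and type it, append to a Delta table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: raw CSV landing zone (hypothetical path).
raw = spark.read.option("header", True).csv("/mnt/landing/orders/")

# Transform: type the columns, drop bad rows, derive a date partition.
clean = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("order_id").isNotNull())
       .withColumn("order_date", F.to_date("created_at"))
)

# Load: append into a Delta table partitioned by order_date.
(clean.write.format("delta")
      .mode("append")
      .partitionBy("order_date")
      .save("/mnt/curated/orders/"))
```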
Posted 1 day ago
4.0 - 8.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Data Quality Engineer, your primary responsibility will be to analyze business and technical requirements to design, develop, and execute comprehensive test plans for ETL pipelines and data transformations. You will perform data validation, reconciliation, and integrity checks across various data sources and target systems. Additionally, you will be expected to build and automate data quality checks using SQL and/or Python scripting. It will be your duty to identify, document, and track data quality issues, anomalies, and defects. Collaboration is key in this role, as you will work closely with data engineers, developers, QA, and business stakeholders to understand data requirements and ensure that data quality standards are met. You will define data quality KPIs and implement continuous monitoring frameworks. Participation in data model reviews and providing input on data quality considerations will also be part of your responsibilities. In case of data discrepancies, you will be expected to perform root cause analysis and work with teams to drive resolution. Ensuring alignment with data governance policies, standards, and best practices will also fall under your purview. To qualify for this position, you should hold a Bachelor's degree in Computer Science, Information Technology, or a related field. Additionally, you should have 4 to 7 years of experience as a Data Quality Engineer, ETL Tester, or a similar role. A strong understanding of ETL concepts, data warehousing principles, and relational database design is essential. Proficiency in SQL for complex querying, data profiling, and validation tasks is required. Familiarity with data quality tools, testing methodologies, and modern cloud data ecosystems (AWS, Snowflake, Apache Spark, Redshift) will be advantageous. Moreover, advanced knowledge of SQL, data pipeline tools like Airflow, DBT, or Informatica, and experience integrating data validation processes into CI/CD pipelines using tools like GitHub Actions, Jenkins, or similar are desired qualifications. An understanding of big data platforms, data lakes, non-relational databases, data lineage, and master data management (MDM) concepts, along with experience with Agile/Scrum development methodologies, will be beneficial for excelling in this role. Your excellent analytical and problem-solving skills, along with strong attention to detail, will be valuable assets in fulfilling the responsibilities of a Data Quality Engineer.
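A minimal sketch of the automated data quality checks described here, using plain Pandas assertions; the table and column names are invented for illustration.

```python
# Simple rule-based data quality checks over a frame.
import pandas as pd

def run_quality_checks(df: pd.DataFrame) -> list:
    """Return a list of human-readable data quality failures."""
    failures = []
    if df["order_id"].isnull().any():
        failures.append("order_id contains nulls")
    if df["order_id"].duplicated().any():
        failures.append("order_id contains duplicates")
    if (df["amount"] < 0).any():
        failures.append("amount contains negative values")
    return failures

df = pd.DataFrame({"order_id": [1, 2, 2], "amount": [10.0, -5.0, 7.5]})
# Prints: ['order_id contains duplicates', 'amount contains negative values']
print(run_quality_checks(df))
```

In a production setup the same rules would typically run inside an orchestrator (Airflow, DBT tests) and fail the pipeline on any non-empty result.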
Posted 1 day ago
4.0 - 8.0 years
0 Lacs
Pune, Maharashtra
On-site
We are seeking a Senior Data Engineer who is proficient in Azure Databricks, PySpark, and distributed computing to create and enhance scalable ETL pipelines specifically for manufacturing analytics. Your responsibilities will include working with industrial data to support real-time and batch data processing needs. Your role will involve constructing scalable real-time and batch processing workflows utilizing Azure Databricks, PySpark, and Apache Spark. You will be responsible for data pre-processing tasks such as cleaning, transformation, deduplication, normalization, encoding, and scaling to guarantee high-quality input for downstream analytics. Designing and managing cloud-based data architectures, such as data lakes, lakehouses, and warehouses, following the Medallion Architecture, will also be part of your duties. You will be expected to deploy and optimize data solutions on Azure, AWS, or GCP, focusing on performance, security, and scalability. Developing and optimizing ETL/ELT pipelines for structured and unstructured data sourced from IoT, MES, SCADA, LIMS, and ERP systems, and automating data workflows using CI/CD and DevOps best practices for security and compliance, will also be essential. Monitoring, troubleshooting, and enhancing data pipelines for high availability and reliability, as well as utilizing Docker and Kubernetes for scalable data processing, will be key aspects of your role. Collaboration with automation teams will also be required for effective project delivery. The ideal candidate will hold a Bachelor's or Master's degree in Computer Science, Information Technology, or a related field, with a specific requirement for IIT graduates. You should possess at least 4 years of experience in data engineering with a focus on cloud platforms like Azure, AWS, or GCP. Proficiency in PySpark, Azure Databricks, Python, and Apache Spark, and expertise in various databases (relational, time series, and NoSQL) is necessary. Experience with containerization tools like Docker and Kubernetes, strong analytical and problem-solving skills, familiarity with MLOps and DevOps practices, excellent communication and collaboration abilities, and the flexibility to adapt to a dynamic startup environment are desirable qualities for this role.
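As a hedged illustration of the pre-processing steps named above (deduplication, normalization/scaling), here is a small PySpark sketch over an invented sensor-style frame; the column names are hypothetical.

```python
# Deduplicate exact repeats, then min-max scale a reading to [0, 1].
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("preprocess").getOrCreate()

df = spark.createDataFrame(
    [("m1", 20.0), ("m1", 20.0), ("m2", 80.0)],
    ["machine_id", "temp_c"],
)

deduped = df.dropDuplicates()

# Compute global min/max once, then apply the scaling column-wise.
bounds = deduped.agg(F.min("temp_c").alias("lo"), F.max("temp_c").alias("hi")).first()
scaled = deduped.withColumn(
    "temp_scaled",
    (F.col("temp_c") - F.lit(bounds.lo)) / F.lit(bounds.hi - bounds.lo),
)
scaled.show()
```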
Posted 1 day ago
10.0 - 14.0 years
0 Lacs
Pune, Maharashtra
On-site
The Applications Development Technology Lead Analyst role at our organization involves working closely with the Technology team to establish and implement new or updated application systems and programs. Your primary responsibility will be to lead applications systems analysis and programming activities. As the Applications Development Technology Lead Analyst, you will collaborate with various management teams to ensure seamless integration of functions to achieve organizational goals. You will also be responsible for identifying necessary system enhancements for deploying new products and process improvements. Additionally, you will play a key role in resolving high-impact problems and projects by evaluating complex business processes and industry standards. Your expertise in applications programming will be crucial in ensuring that application design aligns with the overall architecture blueprint. You will need to have a deep understanding of system flow and develop coding, testing, debugging, and implementation standards. Furthermore, you will be expected to have a comprehensive knowledge of how different business areas integrate to achieve business objectives. In this position, you will provide in-depth analysis and innovative solutions to address issues effectively. You will also serve as an advisor or coach to mid-level developers and analysts, assigning work as needed. It is essential to assess risks carefully when making business decisions, with a focus on upholding the firm's reputation and complying with relevant laws and regulations. To qualify for this role, you should have 6-10 years of relevant experience in Apps Development or systems analysis. You must also possess extensive experience in system analysis and software application programming, along with a track record of managing and implementing successful projects. Being a Subject Matter Expert (SME) in at least one area of Applications Development will be advantageous. A Bachelor's degree or equivalent experience is required, while a Master's degree is preferred. The ability to adjust priorities swiftly, demonstrated leadership and project management skills, and clear written and verbal communication are also essential qualifications for this position. The job description provides an overview of the typical responsibilities associated with this role. As a Vice President (VP) in this capacity, you will lead a specific technical vertical (Frontend, Backend, or Data), mentor developers, and ensure timely, scalable, and testable delivery within your domain. Your responsibilities will include leading a team of engineers, translating architecture into execution, reviewing complex components, and driving data platform migration projects. Additionally, you will be expected to evaluate and implement AI-based tools for enhanced productivity, testing, and code improvement. The required skills for this role include 10-14 years of experience in leading development teams, delivering cloud-native solutions, and proficiency in programming languages such as Java, Python, and JavaScript/TypeScript. Familiarity with frameworks like Spring Boot/WebFlux, Angular, and Node.js, databases including Oracle and MongoDB, cloud technologies such as ECS, S3, Lambda, and Kubernetes, as well as data technologies like Apache Spark and Snowflake, are also essential. Strong mentoring, conflict resolution, and cross-team communication skills are important attributes for success in this position.
Posted 1 day ago
9.0 - 13.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As an ideal candidate for this role, you should possess in-depth knowledge of Python and have good experience creating APIs using FastAPI. You should also have exposure to data libraries such as Pandas (DataFrames) and NumPy, as well as knowledge of Apache open-source components and Apache Spark. Familiarity with Lakehouse architecture and open table formats is also desirable. Additionally, you should be well-versed in automated unit testing, preferably using PyTest, and have exposure to distributed computing. Experience working in a Linux environment is a must, while working knowledge of Kubernetes would be considered an added advantage. Basic exposure to ML and MLOps would also be advantageous for this role.
Posted 1 day ago
3.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As a Custom Software Engineer, you will be responsible for developing custom software solutions to design, code, and enhance components across systems or applications. Your role will involve using modern frameworks and agile practices to deliver scalable, high-performing solutions tailored to specific business needs. On a typical day, you will collaborate with cross-functional teams to understand business requirements and work towards aligning the software solutions with project goals. You are expected to be a subject matter expert (SME) within the team, make team decisions, and engage in problem-solving activities that contribute to the success of the organization. Additionally, you will mentor junior team members to enhance their skills and knowledge. Professional and technical skills required for this role include proficiency in Apache Spark, a strong understanding of distributed computing principles and frameworks, experience with data processing and transformation using Apache Spark, familiarity with cloud platforms supporting Apache Spark, and the ability to write efficient and optimized code for data processing tasks. Candidates for this position should have a minimum of 5 years of experience in Apache Spark. This role is based at our Pune office and requires a minimum of 15 years of full-time education.
Posted 1 day ago
3.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
The Content and Data Analytics team is an integral part of Global Operations at Elsevier, within the DataOps division. The team primarily provides data analysis services using Databricks, catering to product owners and data scientists of Elsevier's Research Data Platform. Your work in this team will directly contribute to the development of cutting-edge data analytics products for the scientific research sector, including renowned products like Scopus and SciVal. As a Data Analyst II, you are expected to possess a foundational understanding of best practices and project execution, with supervision from senior team members. Your responsibilities will include generating basic insights and recommendations within your area of expertise, supporting analytics team members, and gradually taking the lead on low complexity analytics projects. Your role will be situated within DataOps, supporting data scientists working within the Domains of the Research Data Platform. The Domains are functional units responsible for delivering various data products through data science algorithms, presenting you with a diverse range of analytical activities. Tasks may involve delving into extensive datasets to address queries, conducting large-scale data preparation, evaluating data science algorithm metrics, and more. To excel in this role, you must possess a sharp eye for detail, strong analytical skills, and proficiency in at least one data analysis system. Curiosity, dedication to quality work, and an interest in the scientific research realm and Elsevier's products are essential. Effective communication with stakeholders worldwide is crucial, hence a high level of English proficiency is required. Requirements for this position include a minimum of 3 years of work experience, coding proficiency in a programming language (preferably Python) and SQL, familiarity with string manipulation functions like regex, prior exposure to data analysis tools such as Pandas or Apache Spark/Databricks, knowledge of basic statistics relevant to data science, and familiarity with visualization tools like Tableau/Power BI. Furthermore, experience with Agile tools like JIRA is advantageous. Stakeholder management skills are crucial, involving building strong relationships with Data Scientists and Product Managers, aligning activities with their goals, and presenting achievements and project updates effectively. In addition to technical competencies, soft skills like effective collaboration, proactive problem-solving, and a drive for results are highly valued. Key results for this role include understanding task requirements, data gathering and refinement, interpretation of large datasets, reporting findings through effective storytelling, formulating recommendations, and identifying new opportunities. Elsevier promotes a healthy work-life balance with various well-being initiatives, shared parental leave, study assistance, and sabbaticals. The company offers comprehensive health insurance, flexible working arrangements, employee assistance programs, and modern family benefits to support employees' holistic well-being. As a global leader in information and analytics, Elsevier plays a pivotal role in advancing science and healthcare outcomes. Your work with the company contributes to addressing global challenges and fostering a sustainable future through innovative technologies and impactful partnerships. Elsevier is committed to a fair and accessible hiring process. If you require accommodations or adjustments due to a disability or other needs, please notify the company. Furthermore, be cautious of potential scams during your job search and familiarize yourself with the Candidate Privacy Policy for a secure application process. For US job seekers, it's important to know your rights regarding Equal Employment Opportunity laws.
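As an illustration of the regex-based string manipulation this posting lists among its requirements, a small Pandas sketch follows; the data is an invented example.

```python
# Profile a text column: extract a four-digit year where one exists.
import pandas as pd

titles = pd.Series([
    "A Study of Graphene (2021)",
    "Deep Learning for Chemistry (2023)",
    "Untitled manuscript",
])

# str.extract returns a frame of capture groups; take the first group.
years = titles.str.extract(r"\((\d{4})\)")[0]
print(years.notna().mean())  # share of records with a parseable year
```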
Posted 1 day ago
3.0 - 7.0 years
0 Lacs
Coimbatore, Tamil Nadu
On-site
As a Data Engineer at our IT Services Organization, you will be responsible for developing and maintaining scalable data processing systems using Apache Spark and Python. Your role will involve designing and implementing Big Data solutions that integrate data from various sources, including RDBMS, NoSQL databases, and cloud services. Additionally, you will lead a team of data engineers to ensure efficient project execution and adherence to best practices. Your key responsibilities will include optimizing Spark jobs for performance and scalability, collaborating with cross-functional teams to gather requirements, and delivering data solutions that meet business needs. You will also be involved in implementing ETL processes and frameworks to facilitate data integration and utilizing cloud data services such as GCP for data storage and processing. Applying Agile methodologies to manage project timelines and deliverables will be an essential part of your role. To excel in this position, you should have proficiency in PySpark and Apache Spark, along with strong knowledge of Python for data engineering tasks. Hands-on experience with Google Cloud Platform (GCP) and expertise in designing and optimizing Big Data pipelines are crucial. Leadership skills in data engineering team management, understanding of ETL frameworks and distributed computing, familiarity with cloud-based data services, and experience with Agile delivery are also required. We are looking for candidates with a Bachelor's degree in Computer Science, Information Technology, or a related field. It is essential to stay updated with the latest trends and technologies in Big Data and cloud computing to contribute effectively to our projects. If you are passionate about data engineering and eager to work in a dynamic and innovative environment, we encourage you to apply for this exciting opportunity.
Posted 1 day ago
7.0 - 11.0 years
0 Lacs
Karnataka
On-site
As a Senior Engineer at Impetus Technologies, you will play a crucial role in designing, developing, and deploying scalable data processing applications using Java and Big Data technologies. Your responsibilities will include collaborating with cross-functional teams, mentoring junior engineers, and contributing to architectural decisions to enhance system performance and scalability. Your key responsibilities will revolve around designing and maintaining high-performance applications, implementing data ingestion and processing workflows using frameworks like Hadoop and Spark, and optimizing existing applications for improved performance and reliability. You will also be actively involved in mentoring junior engineers, participating in code reviews, and staying updated with the latest technology trends in Java and Big Data. To excel in this role, you should possess strong proficiency in the Java programming language, hands-on experience with Big Data technologies such as Apache Hadoop and Apache Spark, and an understanding of distributed computing concepts. Additionally, you should have experience with data processing frameworks and databases, strong problem-solving skills, and excellent communication and teamwork abilities. In this role, you will collaborate with a diverse team of skilled engineers, data scientists, and product managers who are passionate about technology and innovation. The team environment encourages knowledge sharing, continuous learning, and regular technical workshops to enhance your skills and keep you updated with industry trends. Overall, as a Senior Engineer at Impetus Technologies, you will be responsible for designing and developing scalable Java applications for Big Data processing, ensuring code quality and performance, and troubleshooting and optimizing existing systems to enhance performance and scalability.

Qualifications:
- Strong proficiency in the Java programming language
- Hands-on experience with Big Data technologies such as Hadoop, Spark, and Kafka
- Understanding of distributed computing concepts
- Experience with data processing frameworks and databases
- Strong problem-solving skills
- Knowledge of version control systems and CI/CD pipelines
- Excellent communication and teamwork abilities
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field preferred

Experience: 7 to 10 years
Job Reference Number: 13131
Posted 1 day ago
10.0 - 14.0 years
0 Lacs
Dehradun, Uttarakhand
On-site
You should have familiarity with modern storage formats like Parquet and ORC. Your responsibilities will include designing and developing conceptual, logical, and physical data models to support enterprise data initiatives. You will build, maintain, and optimize data models within Databricks Unity Catalog, developing efficient data structures using Delta Lake to optimize performance, scalability, and reusability. Collaboration with data engineers, architects, analysts, and stakeholders is essential to ensure data model alignment with ingestion pipelines and business goals. You will translate business and reporting requirements into a robust data architecture using best practices in data warehousing and Lakehouse design. Additionally, maintaining comprehensive metadata artifacts such as data dictionaries, data lineage, and modeling documentation is crucial. Enforcing and supporting data governance, data quality, and security protocols across data ecosystems will be part of your role. You will continuously evaluate and improve modeling processes. The ideal candidate will have 10+ years of hands-on experience in data modeling in Big Data environments. Expertise in OLTP, OLAP, dimensional modeling, and enterprise data warehouse practices is required. Proficiency in modeling methodologies including Kimball, Inmon, and Data Vault is expected. Hands-on experience with modeling tools like ER/Studio, ERwin, PowerDesigner, SQLDBM, dbt, or Lucidchart is preferred. Proven experience in Databricks with Unity Catalog and Delta Lake is necessary, along with a strong command of SQL and Apache Spark for querying and transformation. Experience with the Azure Data Platform, including Azure Data Factory, Azure Data Lake Storage, Azure Synapse Analytics, and Azure SQL Database is beneficial. Exposure to Azure Purview or similar data cataloging tools is a plus. Strong communication and documentation skills are required, with the ability to work in cross-functional agile environments. Qualifications for this role include a Bachelor's or Master's degree in Computer Science, Information Systems, Data Engineering, or a related field. Certifications such as Microsoft DP-203: Data Engineering on Microsoft Azure are desirable. Experience working in agile/scrum environments and exposure to enterprise data security and regulatory compliance frameworks (e.g., GDPR, HIPAA) are also advantageous.
Posted 2 days ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
You will be joining our team as a Senior Data Scientist with expertise in Artificial Intelligence (AI) and Machine Learning (ML). The ideal candidate will have 5-7 years of experience in data science, focusing on AI/ML applications. You are expected to have a strong background in various ML algorithms, programming languages such as Python, R, or Scala, and data processing frameworks like Apache Spark. Proficiency in data visualization tools and experience in model deployment using Docker, Kubernetes, and cloud services will be essential for this role. Your responsibilities will include end-to-end AI/ML project delivery, from data processing to model deployment. You should have a good understanding of statistics, probability, and mathematical concepts used in AI/ML. Additionally, familiarity with big data tools, natural language processing techniques, time-series analysis, and MLOps will be advantageous. As a Senior Data Scientist, you are expected to lead cross-functional project teams and manage data science projects in a production setting. Your problem-solving skills, communication skills, and curiosity to stay updated with the latest advancements in AI and ML are crucial for success in this role. You should be able to convey technical insights clearly to diverse audiences and quickly adapt to new technologies. If you are an innovative, analytical, and collaborative team player with a proven track record in AI/ML project delivery, we invite you to apply for this exciting opportunity.
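For illustration, a minimal sketch of the train-and-evaluate loop at the core of the ML workflow this role spans, using scikit-learn on a toy dataset; the real stack (Spark, cloud deployment, MLOps) is out of scope here.

```python
# Train a classifier on a built-in dataset and report held-out accuracy.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print(accuracy_score(y_test, model.predict(X_test)))
```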
Posted 2 days ago
3.0 - 7.0 years
0 Lacs
Kolkata, West Bengal
On-site
Genpact (NYSE: G) is a global professional services and solutions firm committed to delivering outcomes that shape the future. With over 125,000 employees spread across more than 30 countries, we are fueled by our innate curiosity, entrepreneurial agility, and the aspiration to create lasting value for our clients. Driven by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, leveraging our profound business and industry expertise, digital operations services, and proficiency in data, technology, and AI. We are currently seeking applications for the position of Lead Consultant - Databricks Senior Engineer! As a Lead Consultant - Databricks Senior Engineer, your responsibilities will include: working closely with software designers to ensure adherence to best practices; suggesting improvements to code proficiency and maintainability; occasional customer interaction to analyze user needs and determine technical requirements; designing, building, and maintaining scalable and reliable data pipelines using Databricks; developing high-quality code focused on performance, scalability, and security; collaborating with cross-functional teams to understand data requirements and deliver solutions aligned with business needs; implementing data transformations and intricate algorithms within the Databricks environment; optimizing data processing and refining data architecture to enhance system efficiency and data quality; and mentoring junior engineers while contributing to the establishment of best practices within the team. Staying updated with emerging trends and technologies in data engineering and cloud computing is also imperative.

Minimum Qualifications:
- Experience in data engineering or a related field
- Strong hands-on experience with Databricks, encompassing development of code, pipelines, and data transformations
- Proficiency in at least one programming language (e.g., Python, Scala, Java)
- In-depth knowledge of Apache Spark and its integration within Databricks
- Experience with cloud services (AWS, Azure, or GCP) and their data-related products
- Familiarity with CI/CD practices, version control (Git), and automated testing
- Exceptional problem-solving abilities with the capacity to work both independently and as part of a team
- Bachelor's degree in Computer Science, Engineering, Mathematics, or a related technical field

If you are enthusiastic about leveraging your skills and expertise as a Lead Consultant - Databricks Senior Engineer, join us at Genpact and be a part of shaping a better future for all.

Location: India-Kolkata
Schedule: Full-time
Education Level: Bachelor's / Graduation / Equivalent
Job Posting: Jul 30, 2024, 5:05:42 AM
Unposting Date: Jan 25, 2025, 11:35:42 PM
Posted 2 days ago
2.0 - 6.0 years
0 Lacs
Karnataka
On-site
The Lead QA Engineer is responsible for ensuring the quality and functionality of Big Data systems through automation and rigorous testing. You will lead a team, develop and execute automated test scripts, and work with cross-functional teams to ensure that testing strategies are aligned with product goals. Expertise in Python, Pytest, SQL, Apache Spark, and cloud platforms is crucial for maintaining data integrity and quality across all environments.

Key Responsibilities:

Test Automation & Framework Development:
- Develop and automate processes for gathering expected results from data sources and comparing them with actual testing outcomes (see the sketch after this listing).
- Own the test automation framework, ensuring its scalability and robustness across projects.
- Write, execute, and maintain automated tests using industry-standard tools and frameworks.
- Build and automate tests for relational, flat file, XML, NoSQL, cloud, and Big Data sources.

Test Suite Maintenance & Execution:
- Assist in the development and maintenance of smoke, performance, functional, and regression tests to ensure code functionality.
- Lead the test automation efforts, particularly for Big Data and cloud environments.
- Set up the data, tools, and databases necessary for automating the testing process.
- Work with development teams to adapt test scripts as needed when software changes occur.

Big Data & ETL Testing:
- Execute automated Big Data testing tasks such as performance testing, security testing, migration testing, architecture testing, and visualization testing.
- Perform data validation, process validation, outcome validation, and code coverage testing for Big Data projects.
- Automate the testing process using ETL Validator tools and test setups for big data, including Apache Spark environments.

Team Leadership & Collaboration:
- Lead a team of QA engineers (minimum 3 members), mentoring them to ensure consistent testing quality.
- Collaborate with cross-functional teams in a CI/CD environment to integrate testing seamlessly into the deployment pipeline.
- Report and communicate testing progress, issues, and insights to the Scrum Master and stakeholders.

CI/CD Pipeline & Monitoring:
- Develop automated tests in CI/CD environments, ensuring smooth and reliable deployments.
- Utilize monitoring tools such as New Relic and Grafana to track system performance and identify potential issues.
- Ensure timely completion of testing tasks and drive improvements in automation coverage.

Skills & Experience:

Leadership:
- 6+ years of technical QA experience, with at least 2 years focused on automation testing.
- Experience leading a QA team with a focus on Big Data environments.

Automation & Testing Tools:
- Strong experience with Python, Pytest, or Robot Framework for automated test creation.
- Experience with BDD frameworks like Cucumber or SpecFlow.
- Strong SQL skills, particularly for working with large-scale datasets and cloud platforms.

Big Data Expertise:
- Hands-on experience in Big Data testing, particularly with Apache Spark.
- Knowledge of data testing strategies such as data validation, process validation, and code coverage.
- Experience automating ETL/ELT validation tasks and executing various Big Data testing tasks (e.g., performance, migration, security).

Cloud & CI/CD:
- Proficiency with Big Data cloud platforms, and experience in CI/CD environments.
- Hands-on experience with monitoring tools like New Relic, Grafana, etc.

Behavioral Fit:
- Highly technical with a keen eye for detail.
- Driven, self-motivated, and results-oriented.
- Confident, with the ability to challenge assumptions where necessary.
- Structured, organized, and capable of multitasking across multiple projects.
- Capable of working independently as well as in cross-functional, multicultural teams.

Key Performance Indicators (KPIs):
- Timely completion of testing tasks within specified timeframes.
- Automation and regression testing coverage, with quarterly improvement goals.
- Clear and consistent reporting of issues to the Scrum Master and relevant stakeholders.
- Ownership of the testing lifecycle, from planning to execution.
- Quality and consistency of data across the entire data landscape.
- Accurate and well-maintained documentation.

Education & Certifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Certifications in QA, Big Data, or related technologies are a plus.

Job Type: Full-time
Benefits: Provident Fund, Work from home
Schedule: Day shift, Performance bonus
Experience: total work: 6 years (Preferred), QA Lead: 3 years (Preferred)
Location: Bangalore, Karnataka (Preferred)
Work Location: In person
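A hedged sketch of the expected-vs-actual comparison described under Test Automation above, as a Pytest test over Pandas frames; the frames are invented stand-ins for real source and target query results.

```python
# Run with `pytest`: compare pipeline output against source-derived expectations.
import pandas as pd
import pandas.testing as pdt

def test_target_matches_source():
    expected = pd.DataFrame({"id": [1, 2], "total": [100.0, 250.0]})
    actual = pd.DataFrame({"id": [1, 2], "total": [100.0, 250.0]})
    # Sort and reset index so row order never causes false failures;
    # assert_frame_equal reports a readable diff on any cell/dtype mismatch.
    pdt.assert_frame_equal(
        expected.sort_values("id").reset_index(drop=True),
        actual.sort_values("id").reset_index(drop=True),
    )
```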
Posted 2 days ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
We are seeking experienced and talented engineers to join our team. Your main responsibilities will include designing, building, and maintaining the software that drives the global logistics industry. WiseTech Global is a leading provider of software for the logistics sector, facilitating connectivity for major companies like DHL and FedEx within their supply chains. Our organization is product and engineer-focused, with a strong commitment to enhancing the functionality and quality of our software through continuous innovation. Our primary Research and Development center in Bangalore plays a pivotal role in our growth strategies and product development roadmap. As a Lead Software Engineer, you will serve as a mentor, a leader, and an expert in your field. You should be adept at effective communication with senior management while also being hands-on with the code to deliver effective solutions. The technical environment you will work in includes technologies such as C#, Java, C++, Python, Scala, Spring, Spring Boot, Apache Spark, Hadoop, Hive, Delta Lake, Kafka, Debezium, GKE (Kubernetes Engine), Composer (Airflow), DataProc, DataStreams, DataFlow, MySQL RDBMS, MongoDB NoSQL (Atlas), UIPath, Helm, Flyway, Sterling, EDI, Redis, Elastic Search, Grafana Dashboard, and Docker. Before applying, please note that WiseTech Global may engage external service providers to assess applications. By submitting your application and personal information, you agree to WiseTech Global sharing this data with external service providers who will handle it confidentially in compliance with privacy and data protection laws.
Posted 2 days ago
3.0 - 7.0 years
0 Lacs
West Bengal
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. We are counting on your unique voice and perspective to help EY become even better. Join us and build an exceptional experience for yourself, and a better working world for all. We are seeking a highly skilled and motivated Data Analyst with experience in ETL services to join our dynamic team. As a Data Analyst, you will be responsible for data requirement gathering, preparing data requirement artefacts, data integration strategies, data quality, data cleansing, and optimizing data pipelines and solutions that support business intelligence, analytics, and large-scale data processing. You will collaborate closely with data engineering teams to ensure seamless data flow across our systems. The role requires hands-on experience in the Financial Services domain with solid Data Management, Python, SQL, and Advanced SQL development skills. You should be able to interact with data stakeholders and source teams to gather data requirements; understand, analyze, and interpret large datasets; prepare data dictionaries, source-to-target mappings, and reporting requirements; and develop advanced programs for data extraction and analysis.

Key Responsibilities:
- Interact with data stakeholders and source teams to gather data requirements
- Understand, analyze, and interpret large datasets
- Prepare data dictionaries, source-to-target mappings, and reporting requirements
- Develop advanced programs for data extraction and preparation
- Discover, design, and develop analytical methods to support data processing
- Perform data profiling manually or using profiling tools
- Identify critical data elements and PII handling processes/mandates
- Collaborate with the technology team to develop analytical models and validate results
- Interface and communicate with onsite teams directly to understand requirements
- Provide technical solutions as per business needs and best practices

Required Skills and Qualifications:
- BE/BTech/MTech/MCA with 3-7 years of industry experience in data analysis and management
- Experience in finance data domains
- Strong Python programming and data analysis skills
- Strong advanced SQL/PL SQL programming experience
- In-depth experience in data management, data integration, ETL, data modeling, data mapping, data profiling, data quality, reporting, and testing

Good to have:
- Experience using Agile methodologies
- Experience using cloud technologies such as AWS or Azure
- Experience in Kafka, Apache Spark using SparkSQL and Spark Streaming, or Apache Storm

Other key capabilities:
- Client-facing skills and proven ability in effective planning, executing, and problem-solving
- Excellent communication, interpersonal, and teamworking skills
- Multi-tasking attitude, flexible, with the ability to change priorities quickly
- Methodical approach, logical thinking, and ability to plan work and meet deadlines
- Accuracy and attention to detail
- Written and verbal communication skills
- Willingness to travel to meet client needs
- Ability to plan resource requirements from high-level specifications
- Ability to quickly understand and learn new technologies/features and inspire change within the team and client organization

EY exists to build a better working world, helping to create long-term value for clients, people, and society, and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate across assurance, consulting, law, strategy, tax, and transactions. EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 2 days ago
8.0 - 12.0 years
0 Lacs
Indore, Madhya Pradesh
On-site
The AM3 Group is looking for a highly skilled Senior Java Developer with a strong background in AWS cloud services to be a part of our dynamic team. In this role, you will have the opportunity to create and manage modern, scalable, and cloud-native applications using Java (up to Java 17), Spring Boot, Angular, and a comprehensive range of AWS tools. As a Senior Java Developer at AM3 Group, your responsibilities will include developing full-stack applications utilizing Java, Spring Boot, Angular, and RESTful APIs. You will be involved in building and deploying cloud-native solutions with AWS services such as EC2, S3, Lambda, RDS, DynamoDB, and API Gateway. Additionally, you will be tasked with designing and implementing microservices architectures for enhanced scalability and resilience. Your role will also entail creating and maintaining CI/CD pipelines using tools like Jenkins, GitHub Actions, AWS CodePipeline, and Terraform, as well as containerizing applications with Docker and managing them through Kubernetes (EKS). Monitoring and optimizing performance using AWS CloudWatch, X-Ray, and the ELK Stack, working with Apache Kafka and Redis for real-time event-driven systems, and conducting unit/integration testing with JUnit, Mockito, Jasmine, and API testing via Postman are also key aspects of the role. Collaboration within Agile/Scrum teams to deliver features in iterative sprints is an essential part of your responsibilities. The ideal candidate should possess a minimum of 8 years of Java development experience with a strong understanding of Java 8/11/17, expertise in Spring Boot, Hibernate, and microservices, as well as solid experience with AWS including infrastructure and serverless (Lambda, EC2, S3, etc.). Frontend development exposure with Angular (v2-12), JavaScript, and Bootstrap, hands-on experience with CI/CD, GitHub Actions, Jenkins, and Terraform, familiarity with SQL (MySQL, Oracle) and NoSQL (DynamoDB, MongoDB), and knowledge of SQS, JMS, and event-driven architecture are required skills. Additionally, familiarity with DevSecOps and cloud security best practices is essential. Preferred qualifications include experience with serverless frameworks (AWS Lambda), familiarity with React.js, Node.js, or Kotlin, and exposure to Big Data, Apache Spark, or machine learning pipelines. Join our team at AM3 Group to work on challenging and high-impact cloud projects, benefit from competitive compensation and benefits, enjoy a flexible work environment, be part of a culture of innovation and continuous learning, and gain global exposure through cross-functional collaboration. Apply now to be a part of a future-ready team that is shaping cloud-native enterprise solutions! For any questions or referrals, please contact us at careers@am3group.com. To learn more about us, visit our website at https://am3group.com/.
Posted 2 days ago
4.0 - 8.0 years
0 Lacs
Pune, Maharashtra
On-site
YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, you will be part of a team of innovative professionals working with cutting-edge technologies. Our purpose is anchored in bringing real positive changes in an increasingly virtual world, transcending generational gaps and future disruptions. We are currently seeking SQL Professionals for the role of Data Engineer with 4-6 years of experience. The ideal candidate must have a strong academic background. As a Data Engineer at BNY Mellon in Pune, you will be responsible for designing, developing, and maintaining scalable data pipelines and ETL processes using Apache Spark and SQL. You will collaborate with data scientists and analysts to understand data requirements, optimize and query large datasets, ensure data quality and integrity, implement data governance and security best practices, participate in code reviews, and troubleshoot data-related issues promptly. Qualifications for this role include 4-6 years of experience in data engineering, proficiency in SQL and data processing frameworks like Apache Spark, knowledge of database technologies such as SQL Server or Oracle, experience with cloud platforms like AWS, Azure, or Google Cloud, familiarity with data warehousing solutions, understanding of Python, Scala, or Java for data manipulation, excellent analytical and problem-solving skills, and good communication skills to work effectively in a team environment. Joining YASH means being empowered to shape your career in an inclusive team environment. We offer career-oriented skilling models and promote continuous learning, unlearning, and relearning at a rapid pace. Our workplace is based on four principles: flexible work arrangements, free spirit, and emotional positivity; agile self-determination, trust, transparency, and open collaboration; all support needed for the realization of business goals; and stable employment with a great atmosphere and ethical corporate culture.
Posted 2 days ago