4.0 - 8.0 years
0 Lacs
pune, maharashtra
On-site
As a Data Quality Engineer, your primary responsibility will be to analyze business and technical requirements to design, develop, and execute comprehensive test plans for ETL pipelines and data transformations. You will perform data validation, reconciliation, and integrity checks across various data sources and target systems, and build and automate data quality checks using SQL and/or Python scripting. You will also identify, document, and track data quality issues, anomalies, and defects.

Collaboration is key in this role: you will work closely with data engineers, developers, QA, and business stakeholders to understand data requirements and ensure that data quality standards are met. You will define data quality KPIs, implement continuous monitoring frameworks, and participate in data model reviews, providing input on data quality considerations. When data discrepancies arise, you will perform root cause analysis and work with teams to drive resolution. Ensuring alignment with data governance policies, standards, and best practices also falls under your purview.

To qualify for this position, you should hold a Bachelor's degree in Computer Science, Information Technology, or a related field, and have 4 to 7 years of experience as a Data Quality Engineer, ETL Tester, or a similar role. A strong understanding of ETL concepts, data warehousing principles, and relational database design is essential, as is proficiency in SQL for complex querying, data profiling, and validation tasks. Familiarity with data quality tools, testing methodologies, and modern cloud data ecosystems (AWS, Snowflake, Apache Spark, Redshift) will be advantageous. Advanced knowledge of SQL, data pipeline tools such as Airflow, dbt, or Informatica, and experience integrating data validation processes into CI/CD pipelines using tools like GitHub Actions or Jenkins are desired qualifications. An understanding of big data platforms, data lakes, non-relational databases, data lineage, and master data management (MDM) concepts, along with experience in Agile/Scrum development methodologies, will help you excel in this role. Excellent analytical and problem-solving skills and strong attention to detail round out the profile.
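As an illustration of the kind of automated data quality check this posting describes, here is a minimal sketch in PySpark; the table names and the key column are hypothetical assumptions, not details from the posting:

```python
# A minimal sketch of automated data quality checks in PySpark;
# table names and the key column are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

source = spark.read.table("staging.orders")    # hypothetical source table
target = spark.read.table("warehouse.orders")  # hypothetical target table

# Reconciliation: row counts should match after the ETL load.
src_count, tgt_count = source.count(), target.count()
assert src_count == tgt_count, f"Row count mismatch: {src_count} vs {tgt_count}"

# Integrity checks: no null keys and no duplicate keys in the target.
null_keys = target.filter(F.col("order_id").isNull()).count()
dup_keys = (target.groupBy("order_id").count()
                  .filter(F.col("count") > 1).count())
assert null_keys == 0, f"{null_keys} null order_id values found"
assert dup_keys == 0, f"{dup_keys} duplicated order_id values found"
```

In practice such checks would be scheduled (for example from Airflow, which the posting mentions) so failures surface before downstream consumers see bad data.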
Posted 3 weeks ago
4.0 - 8.0 years
0 Lacs
pune, maharashtra
On-site
We are seeking a Senior Data Engineer proficient in Azure Databricks, PySpark, and distributed computing to create and enhance scalable ETL pipelines for manufacturing analytics. You will work with industrial data to support both real-time and batch processing needs.

Your role will involve constructing scalable real-time and batch processing workflows using Azure Databricks, PySpark, and Apache Spark. You will handle data pre-processing tasks such as cleaning, transformation, deduplication, normalization, encoding, and scaling to guarantee high-quality input for downstream analytics. Designing and managing cloud-based data architectures, such as data lakes, lakehouses, and warehouses following the Medallion Architecture, will also be part of your duties. You will deploy and optimize data solutions on Azure, AWS, or GCP with a focus on performance, security, and scalability; develop and optimize ETL/ELT pipelines for structured and unstructured data sourced from IoT, MES, SCADA, LIMS, and ERP systems; and automate data workflows using CI/CD and DevOps best practices for security and compliance. Monitoring, troubleshooting, and enhancing data pipelines for high availability and reliability, utilizing Docker and Kubernetes for scalable data processing, and collaborating with automation teams for effective project delivery are also key aspects of the role.

The ideal candidate will hold a Bachelor's or Master's degree in Computer Science, Information Technology, or a related field (IIT graduates specifically required), with at least 4 years of experience in data engineering on cloud platforms such as Azure, AWS, or GCP. Proficiency in PySpark, Azure Databricks, Python, and Apache Spark, plus expertise in relational, time series, and NoSQL databases, is necessary. Experience with containerization tools like Docker and Kubernetes, strong analytical and problem-solving skills, familiarity with MLOps and DevOps practices, excellent communication and collaboration abilities, and the flexibility to adapt to a dynamic startup environment are desirable qualities for this role.
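To make the Medallion (bronze-silver-gold) pre-processing step concrete, here is a minimal bronze-to-silver sketch on Databricks; the storage paths and the sensor schema are hypothetical assumptions:

```python
# A minimal sketch of a bronze-to-silver step in a Medallion pipeline on
# Databricks; paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: raw sensor events landed as-is from an IoT source.
bronze = spark.read.format("delta").load("/mnt/lake/bronze/sensor_events")

# Silver: cleaned, deduplicated, normalized records.
silver = (bronze
          .dropDuplicates(["device_id", "event_ts"])           # deduplication
          .filter(F.col("reading").isNotNull())                 # cleaning
          .withColumn("event_ts", F.to_timestamp("event_ts"))   # normalization
          .withColumn("reading", F.col("reading").cast("double")))

silver.write.format("delta").mode("overwrite").save("/mnt/lake/silver/sensor_events")
```

A gold layer would typically aggregate the silver table into business-level views (e.g., hourly readings per production line) for analytics consumers.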
Posted 3 weeks ago
10.0 - 14.0 years
0 Lacs
pune, maharashtra
On-site
The Applications Development Technology Lead Analyst role involves working closely with the Technology team to establish and implement new or updated application systems and programs. Your primary responsibility will be to lead applications systems analysis and programming activities.

You will collaborate with various management teams to ensure seamless integration of functions to achieve organizational goals, identify necessary system enhancements for deploying new products and process improvements, and play a key role in resolving high-impact problems and projects by evaluating complex business processes and industry standards. Your expertise in applications programming will be crucial in ensuring that application design aligns with the overall architecture blueprint. You will need a deep understanding of system flow and will develop coding, testing, debugging, and implementation standards, along with comprehensive knowledge of how different business areas integrate to achieve business objectives. You will provide in-depth analysis and innovative solutions, serve as an advisor or coach to mid-level developers and analysts, assign work as needed, and assess risks carefully when making business decisions, with a focus on upholding the firm's reputation and complying with relevant laws and regulations.

To qualify for this role, you should have 6-10 years of relevant experience in applications development or systems analysis, extensive experience in system analysis and software application programming, and a track record of managing and implementing successful projects. Being a Subject Matter Expert (SME) in at least one area of Applications Development is advantageous. A Bachelor's degree or equivalent experience is required; a Master's degree is preferred. The ability to adjust priorities swiftly, demonstrated leadership and project management skills, and clear written and verbal communication are also essential.

As a Vice President (VP) in this capacity, you will lead a specific technical vertical (Frontend, Backend, or Data), mentor developers, and ensure timely, scalable, and testable delivery within your domain. Your responsibilities will include leading a team of engineers, translating architecture into execution, reviewing complex components, and driving data platform migration projects, as well as evaluating and implementing AI-based tools for enhanced productivity, testing, and code improvement. The VP role requires 10-14 years of experience leading development teams and delivering cloud-native solutions, plus proficiency in programming languages such as Java, Python, and JavaScript/TypeScript. Familiarity with frameworks like Spring Boot/WebFlux, Angular, and Node.js; databases including Oracle and MongoDB; cloud technologies such as ECS, S3, Lambda, and Kubernetes; and data technologies like Apache Spark and Snowflake is also essential. Strong mentoring, conflict resolution, and cross-team communication skills are important attributes for success in this position.
Posted 3 weeks ago
9.0 - 13.0 years
0 Lacs
chennai, tamil nadu
On-site
As an ideal candidate for this role, you should possess in-depth knowledge of Python and solid experience creating APIs using FastAPI. You should also have exposure to data libraries such as Pandas and NumPy, as well as knowledge of Apache open-source components and Apache Spark. Familiarity with Lakehouse architecture and open table formats is also desirable. Additionally, you should be well-versed in automated unit testing, preferably using PyTest, and have exposure to distributed computing. Experience working in a Linux environment is a must, working knowledge of Kubernetes would be an added advantage, and basic exposure to ML and MLOps would also be advantageous.
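To give a flavor of the FastAPI-plus-Pandas combination this posting asks for, here is a minimal hedged sketch; the CSV path, column names, and endpoint shape are illustrative assumptions:

```python
# A minimal sketch of a FastAPI endpoint serving Pandas-derived summary
# statistics; the CSV path and columns are hypothetical.
import pandas as pd
from fastapi import FastAPI

app = FastAPI()

@app.get("/stats/{column}")
def column_stats(column: str) -> dict:
    # Load a small dataset and return summary statistics for one column.
    df = pd.read_csv("data/measurements.csv")  # hypothetical dataset
    series = df[column]
    return {
        "count": int(series.count()),
        "mean": float(series.mean()),
        "min": float(series.min()),
        "max": float(series.max()),
    }

# Run with: uvicorn main:app --reload
```

In a larger service, the dataset load would be cached or replaced by a Spark-backed query rather than re-read per request.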
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
chennai, tamil nadu
On-site
As a Custom Software Engineer, you will develop custom software solutions to design, code, and enhance components across systems or applications, using modern frameworks and agile practices to deliver scalable, high-performing solutions tailored to specific business needs. On a typical day, you will collaborate with cross-functional teams to understand business requirements and align software solutions with project goals. You are expected to act as a subject matter expert (SME) within the team, make team decisions, and engage in problem-solving activities that contribute to the success of the organization. Additionally, you will mentor junior team members to enhance their skills and knowledge.

Professional and technical skills required for this role include proficiency in Apache Spark, a strong understanding of distributed computing principles and frameworks, experience with data processing and transformation using Apache Spark, familiarity with cloud platforms supporting Apache Spark, and the ability to write efficient, optimized code for data processing tasks. Candidates should have a minimum of 5 years of experience in Apache Spark. This role is based at our Pune office and requires a minimum of 15 years of full-time education.
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
chennai, tamil nadu
On-site
The Content and Data Analytics team is an integral part of Global Operations at Elsevier, within the DataOps division. The team primarily provides data analysis services using Databricks, catering to product owners and data scientists of Elsevier's Research Data Platform. Your work in this team will directly contribute to the development of cutting-edge data analytics products for the scientific research sector, including renowned products like Scopus and SciVal.

As a Data Analyst II, you are expected to possess a foundational understanding of best practices and project execution, with supervision from senior team members. Your responsibilities will include generating basic insights and recommendations within your area of expertise, supporting analytics team members, and gradually taking the lead on low-complexity analytics projects. Your role sits within DataOps, supporting data scientists working within the Domains of the Research Data Platform. The Domains are functional units responsible for delivering various data products through data science algorithms, presenting you with a diverse range of analytical activities: delving into extensive datasets to address queries, conducting large-scale data preparation, evaluating data science algorithm metrics, and more.

To excel in this role, you must possess a sharp eye for detail, strong analytical skills, and proficiency in at least one data analysis system. Curiosity, dedication to quality work, and an interest in the scientific research realm and Elsevier's products are essential. Effective communication with stakeholders worldwide is crucial, so a high level of English proficiency is required. Requirements include a minimum of 3 years of work experience; coding proficiency in a programming language (preferably Python) and SQL; familiarity with string manipulation functions like regex; prior exposure to data analysis tools such as Pandas or Apache Spark/Databricks; knowledge of basic statistics relevant to data science; and familiarity with visualization tools like Tableau or Power BI. Experience with Agile tools like JIRA is advantageous. Stakeholder management skills are crucial, involving building strong relationships with Data Scientists and Product Managers, aligning activities with their goals, and presenting achievements and project updates effectively. In addition to technical competencies, soft skills like effective collaboration, proactive problem-solving, and a drive for results are highly valued. Key results for this role include understanding task requirements, data gathering and refinement, interpretation of large datasets, reporting findings through effective storytelling, formulating recommendations, and identifying new opportunities.

Elsevier promotes a healthy work-life balance with various well-being initiatives, shared parental leave, study assistance, and sabbaticals. The company offers comprehensive health insurance, flexible working arrangements, employee assistance programs, and modern family benefits to support employees' holistic well-being. As a global leader in information and analytics, Elsevier plays a pivotal role in advancing science and healthcare outcomes; your work with the company contributes to addressing global challenges and fostering a sustainable future through innovative technologies and impactful partnerships. Elsevier is committed to a fair and accessible hiring process.
If you require accommodations or adjustments due to a disability or other needs, please notify the company. Be cautious of potential scams during your job search and familiarize yourself with the Candidate Privacy Policy for a secure application process. For US job seekers, it is important to know your rights under Equal Employment Opportunity laws.
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
coimbatore, tamil nadu
On-site
As a Data Engineer at our IT services organization, you will develop and maintain scalable data processing systems using Apache Spark and Python. Your role will involve designing and implementing Big Data solutions that integrate data from various sources, including RDBMS, NoSQL databases, and cloud services. Additionally, you will lead a team of data engineers to ensure efficient project execution and adherence to best practices.

Your key responsibilities will include optimizing Spark jobs for performance and scalability, collaborating with cross-functional teams to gather requirements, and delivering data solutions that meet business needs. You will also implement ETL processes and frameworks to facilitate data integration, utilize cloud data services such as GCP for data storage and processing, and apply Agile methodologies to manage project timelines and deliverables.

To excel in this position, you should have proficiency in PySpark and Apache Spark, strong knowledge of Python for data engineering tasks, hands-on experience with Google Cloud Platform (GCP), and expertise in designing and optimizing Big Data pipelines. Leadership skills in data engineering team management, an understanding of ETL frameworks and distributed computing, familiarity with cloud-based data services, and experience with Agile delivery are also required. We are looking for candidates with a Bachelor's degree in Computer Science, Information Technology, or a related field; staying updated with the latest trends and technologies in Big Data and cloud computing is essential to contribute effectively to our projects. If you are passionate about data engineering and eager to work in a dynamic, innovative environment, we encourage you to apply for this exciting opportunity.
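As an illustration of the Spark job optimization responsibility above, here is a minimal sketch of two common techniques, broadcasting a small dimension table and partitioning output by date; the GCS paths and schema are hypothetical assumptions:

```python
# A minimal sketch of two common Spark optimizations: broadcast joins and
# partitioned output. Paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("optimized-etl").getOrCreate()

events = spark.read.parquet("gs://bucket/events/")    # large fact data
devices = spark.read.parquet("gs://bucket/devices/")  # small dimension table

# Broadcast the small table to every executor to avoid a shuffle join.
enriched = events.join(broadcast(devices), on="device_id", how="left")

# Partition output by date so downstream queries prune irrelevant files.
(enriched.write
         .mode("overwrite")
         .partitionBy("event_date")
         .parquet("gs://bucket/curated/events/"))
```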
Posted 3 weeks ago
7.0 - 11.0 years
0 Lacs
karnataka
On-site
As a Senior Engineer at Impetus Technologies, you will play a crucial role in designing, developing, and deploying scalable data processing applications using Java and Big Data technologies. Your responsibilities will include collaborating with cross-functional teams, mentoring junior engineers, and contributing to architectural decisions that enhance system performance and scalability.

Your key responsibilities will revolve around designing and maintaining high-performance applications, implementing data ingestion and processing workflows using frameworks like Hadoop and Spark, optimizing existing applications for improved performance and reliability, and troubleshooting existing systems to enhance performance and scalability. You will also mentor junior engineers, participate in code reviews, and stay updated with the latest technology trends in Java and Big Data. You will collaborate with a diverse team of skilled engineers, data scientists, and product managers who are passionate about technology and innovation; the team environment encourages knowledge sharing, continuous learning, and regular technical workshops to enhance your skills and keep you current with industry trends.

Qualifications:
- Strong proficiency in the Java programming language
- Hands-on experience with Big Data technologies such as Apache Hadoop, Apache Spark, and Kafka
- Understanding of distributed computing concepts
- Experience with data processing frameworks and databases
- Strong problem-solving skills
- Knowledge of version control systems and CI/CD pipelines
- Excellent communication and teamwork abilities
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field preferred

Experience: 7 to 10 years
Job Reference Number: 13131
Posted 3 weeks ago
10.0 - 14.0 years
0 Lacs
dehradun, uttarakhand
On-site
You should have familiarity with modern storage formats like Parquet and ORC. Your responsibilities will include designing and developing conceptual, logical, and physical data models to support enterprise data initiatives. You will build, maintain, and optimize data models within Databricks Unity Catalog, developing efficient data structures using Delta Lake to optimize performance, scalability, and reusability. Collaboration with data engineers, architects, analysts, and stakeholders is essential to ensure data model alignment with ingestion pipelines and business goals. You will translate business and reporting requirements into a robust data architecture using best practices in data warehousing and Lakehouse design, maintain comprehensive metadata artifacts such as data dictionaries, data lineage, and modeling documentation, enforce and support data governance, data quality, and security protocols across data ecosystems, and continuously evaluate and improve modeling processes.

The ideal candidate will have 10+ years of hands-on experience in data modeling in Big Data environments, with expertise in OLTP, OLAP, dimensional modeling, and enterprise data warehouse practices, and proficiency in modeling methodologies including Kimball, Inmon, and Data Vault. Hands-on experience with modeling tools like ER/Studio, ERwin, PowerDesigner, SQLDBM, dbt, or Lucidchart is preferred. Proven experience in Databricks with Unity Catalog and Delta Lake is necessary, along with a strong command of SQL and Apache Spark for querying and transformation. Experience with the Azure Data Platform, including Azure Data Factory, Azure Data Lake Storage, Azure Synapse Analytics, and Azure SQL Database, is beneficial, and exposure to Azure Purview or similar data cataloging tools is a plus. Strong communication and documentation skills are required, with the ability to work in cross-functional agile environments.

Qualifications for this role include a Bachelor's or Master's degree in Computer Science, Information Systems, Data Engineering, or a related field. Certifications such as Microsoft DP-203 (Data Engineering on Microsoft Azure) are desirable. Experience working in agile/scrum environments and exposure to enterprise data security and regulatory compliance frameworks (e.g., GDPR, HIPAA) are also advantageous.
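As an illustration of the dimensional modeling work on Delta Lake described above, here is a minimal sketch defining a dimension and fact table via Spark SQL; the catalog, schema, and column names are hypothetical assumptions:

```python
# A minimal sketch of a dimensional model as Delta tables; catalog, schema,
# and columns are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("""
CREATE TABLE IF NOT EXISTS analytics.sales.dim_customer (
    customer_key BIGINT,
    customer_id  STRING,
    region       STRING,
    valid_from   DATE,
    valid_to     DATE        -- type-2 history columns for slowly changing dims
) USING DELTA
COMMENT 'Customer dimension; one row per customer version'
""")

spark.sql("""
CREATE TABLE IF NOT EXISTS analytics.sales.fact_orders (
    order_id     STRING,
    customer_key BIGINT,     -- foreign key into dim_customer
    order_date   DATE,
    amount       DECIMAL(18, 2)
) USING DELTA
PARTITIONED BY (order_date)
""")
```

In Unity Catalog, the three-level naming (catalog.schema.table) shown here is what allows governance and lineage tooling to track these models centrally.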
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
You will be joining our team as a Senior Data Scientist with expertise in Artificial Intelligence (AI) and Machine Learning (ML). The ideal candidate has a minimum of 5-7 years of experience in data science focused on AI/ML applications, with a strong background in ML algorithms, programming languages such as Python, R, or Scala, and data processing frameworks like Apache Spark. Proficiency in data visualization tools and experience deploying models using Docker, Kubernetes, and cloud services are essential for this role.

Your responsibilities will cover end-to-end AI/ML project delivery, from data processing to model deployment. You should have a solid grounding in the statistics, probability, and mathematical concepts used in AI/ML; familiarity with big data tools, natural language processing techniques, time-series analysis, and MLOps is advantageous. As a Senior Data Scientist, you will lead cross-functional project teams and manage data science projects in a production setting. Strong problem-solving and communication skills, the curiosity to stay current with advances in AI and ML, the ability to convey technical insights clearly to diverse audiences, and the capacity to adapt quickly to new technologies are crucial for success. If you are an innovative, analytical, and collaborative team player with a proven track record in AI/ML project delivery, we invite you to apply for this exciting opportunity.
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
kolkata, west bengal
On-site
Genpact (NYSE: G) is a global professional services and solutions firm committed to delivering outcomes that shape the future. With over 125,000 employees across more than 30 countries, we are fueled by our innate curiosity, entrepreneurial agility, and the aspiration to create lasting value for our clients. Driven by our purpose, the relentless pursuit of a world that works better for people, we serve and transform leading enterprises, including the Fortune Global 500, leveraging our deep business and industry expertise, digital operations services, and proficiency in data, technology, and AI.

We are currently inviting applications for the position of Lead Consultant - Databricks Senior Engineer. Your responsibilities will include:
- Working closely with software designers to ensure adherence to best practices, and suggesting improvements to code proficiency and maintainability
- Occasional customer interaction to analyze user needs and determine technical requirements
- Designing, building, and maintaining scalable, reliable data pipelines using Databricks
- Developing high-quality code with a focus on performance, scalability, and security
- Collaborating with cross-functional teams to understand data requirements and deliver solutions aligned with business needs
- Implementing data transformations and intricate algorithms within the Databricks environment
- Optimizing data processing and refining data architecture to enhance system efficiency and data quality
- Mentoring junior engineers and contributing to the establishment of best practices within the team
- Staying updated with emerging trends and technologies in data engineering and cloud computing

Minimum Qualifications:
- Experience in data engineering or a related field
- Strong hands-on experience with Databricks, encompassing development of code, pipelines, and data transformations
- Proficiency in at least one programming language (e.g., Python, Scala, Java)
- In-depth knowledge of Apache Spark and its integration within Databricks
- Experience with cloud services (AWS, Azure, or GCP) and their data-related products
- Familiarity with CI/CD practices, version control (Git), and automated testing
- Exceptional problem-solving abilities, with the capacity to work both independently and as part of a team
- Bachelor's degree in Computer Science, Engineering, Mathematics, or a related technical field

If you are enthusiastic about applying your skills and expertise as a Lead Consultant - Databricks Senior Engineer, join us at Genpact and be part of shaping a better future for all.

Location: India-Kolkata
Schedule: Full-time
Education Level: Bachelor's / Graduation / Equivalent
Job Posting: Jul 30, 2024, 5:05:42 AM
Unposting Date: Jan 25, 2025, 11:35:42 PM
Posted 3 weeks ago
2.0 - 6.0 years
0 Lacs
karnataka
On-site
The Lead QA Engineer is responsible for ensuring the quality and functionality of Big Data systems through automation and rigorous testing. You will lead a team, develop and execute automated test scripts, and work with cross-functional teams to ensure that testing strategies are aligned with product goals. Expertise in Python, Pytest, SQL, Apache Spark, and cloud platforms is crucial for maintaining data integrity and quality across all environments.

Key Responsibilities:

Test Automation & Framework Development:
- Develop and automate processes for gathering expected results from data sources and comparing them with actual testing outcomes.
- Own the test automation framework, ensuring its scalability and robustness across projects.
- Write, execute, and maintain automated tests using industry-standard tools and frameworks.
- Build and automate tests for relational, flat file, XML, NoSQL, cloud, and Big Data sources.

Test Suite Maintenance & Execution:
- Assist in the development and maintenance of smoke, performance, functional, and regression tests to ensure code functionality.
- Lead the test automation efforts, particularly for Big Data and cloud environments.
- Set up data, tools, and databases necessary for automating the testing process.
- Work with development teams to adapt test scripts as needed when software changes occur.

Big Data & ETL Testing:
- Execute automated Big Data testing tasks such as performance, security, migration, architecture, and visualization testing.
- Perform data validation, process validation, outcome validation, and code coverage testing for Big Data projects.
- Automate the testing process using ETL Validator tools and test setups for big data, including Apache Spark environments.

Team Leadership & Collaboration:
- Lead a team of QA engineers (minimum 3 members), mentoring them to ensure consistent testing quality.
- Collaborate with cross-functional teams in a CI/CD environment to integrate testing seamlessly into the deployment pipeline.
- Report and communicate testing progress, issues, and insights to the Scrum Master and stakeholders.

CI/CD Pipeline & Monitoring:
- Develop automated tests in CI/CD environments, ensuring smooth and reliable deployments.
- Utilize monitoring tools such as New Relic and Grafana to track system performance and identify potential issues.
- Ensure timely completion of testing tasks and drive improvements in automation coverage.

Skills & Experience:

Leadership:
- 6+ years of technical QA experience, with at least 2 years focused on automation testing.
- Experience leading a QA team with a focus on Big Data environments.

Automation & Testing Tools:
- Strong experience with Python and Pytest or Robot Framework for automated test creation.
- Experience with BDD frameworks like Cucumber or SpecFlow.
- Strong SQL skills, particularly for working with large-scale datasets and cloud platforms.

Big Data Expertise:
- Hands-on experience in Big Data testing, particularly with Apache Spark.
- Knowledge of data testing strategies such as data validation, process validation, and code coverage.
- Experience automating ETL/ELT validation tasks and executing various Big Data testing tasks (e.g., performance, migration, security).

Cloud & CI/CD:
- Proficiency with Big Data cloud platforms, and experience in CI/CD environments.
- Hands-on experience with monitoring tools like New Relic and Grafana.

Behavioral Fit:
- Highly technical with a keen eye for detail.
- Driven, self-motivated, and results-oriented.
- Confident, with the ability to challenge assumptions where necessary.
- Structured, organized, and capable of multitasking across multiple projects.
- Capable of working independently as well as in cross-functional, multicultural teams.

Key Performance Indicators (KPIs):
- Timely completion of testing tasks within specified timeframes.
- Automation and regression testing coverage, with quarterly improvement goals.
- Clear and consistent reporting of issues to the Scrum Master and relevant stakeholders.
- Ownership of the testing lifecycle, from planning to execution.
- Quality and consistency of data across the entire data landscape.
- Accurate and well-maintained documentation.

Education & Certifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Certifications in QA, Big Data, or related technologies are a plus.

Job Type: Full-time
Benefits: Provident Fund, Work from home
Schedule: Day shift, Performance bonus
Experience: total work: 6 years (Preferred), QA Lead: 3 years (Preferred)
Location: Bangalore, Karnataka (Preferred)
Work Location: In person
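As an illustration of the Pytest-based data test automation this role centers on, here is a minimal sketch comparing expected results from a source against actual ETL output; the table names and required columns are hypothetical assumptions:

```python
# A minimal sketch of automated data tests with Pytest and PySpark;
# table names and required columns are hypothetical.
import pytest
from pyspark.sql import SparkSession

@pytest.fixture(scope="session")
def spark():
    return SparkSession.builder.appName("etl-tests").getOrCreate()

def test_row_counts_match(spark):
    # Expected results gathered from the source system.
    expected = spark.read.table("staging.transactions").count()
    # Actual outcome produced by the ETL job under test.
    actual = spark.read.table("warehouse.transactions").count()
    assert actual == expected, f"expected {expected} rows, got {actual}"

@pytest.mark.parametrize("column", ["txn_id", "account_id", "amount"])
def test_no_nulls_in_required_columns(spark, column):
    nulls = (spark.read.table("warehouse.transactions")
                  .filter(f"{column} IS NULL").count())
    assert nulls == 0, f"{nulls} null values in required column {column}"
```

Tests like these slot naturally into the CI/CD pipeline stage described above, failing the build when data regressions appear.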
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
We are seeking experienced and talented engineers to join our team. Your main responsibilities will include designing, building, and maintaining the software that drives the global logistics industry. WiseTech Global is a leading provider of software for the logistics sector, facilitating connectivity for major companies like DHL and FedEx within their supply chains. Our organization is product- and engineer-focused, with a strong commitment to enhancing the functionality and quality of our software through continuous innovation. Our primary research and development center in Bangalore plays a pivotal role in our growth strategies and product development roadmap.

As a Lead Software Engineer, you will serve as a mentor, a leader, and an expert in your field. You should be adept at communicating effectively with senior management while remaining hands-on with the code to deliver effective solutions. The technical environment includes C#, Java, C++, Python, Scala, Spring, Spring Boot, Apache Spark, Hadoop, Hive, Delta Lake, Kafka, Debezium, GKE (Kubernetes Engine), Composer (Airflow), DataProc, DataStreams, DataFlow, MySQL RDBMS, MongoDB NoSQL (Atlas), UIPath, Helm, Flyway, Sterling, EDI, Redis, Elastic Search, Grafana dashboards, and Docker.

Before applying, please note that WiseTech Global may engage external service providers to assess applications. By submitting your application and personal information, you agree to WiseTech Global sharing this data with external service providers, who will handle it confidentially in compliance with privacy and data protection laws.
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
west bengal
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. We are counting on your unique voice and perspective to help EY become even better. Join us and build an exceptional experience for yourself, and a better working world for all.

We are seeking a highly skilled and motivated Data Analyst with experience in ETL services to join our dynamic team. As a Data Analyst, you will be responsible for data requirement gathering, preparing data requirement artefacts, data integration strategies, data quality, data cleansing, and optimizing data pipelines and solutions that support business intelligence, analytics, and large-scale data processing. You will collaborate closely with data engineering teams to ensure seamless data flow across our systems. The role requires hands-on experience in the Financial Services domain with solid data management, Python, SQL, and advanced SQL development skills. You should be able to interact with data stakeholders and source teams to gather data requirements; understand, analyze, and interpret large datasets; prepare data dictionaries, source-to-target mappings, and reporting requirements; and develop advanced programs for data extraction and analysis.

Key Responsibilities:
- Interact with data stakeholders and source teams to gather data requirements
- Understand, analyze, and interpret large datasets
- Prepare data dictionaries, source-to-target mappings, and reporting requirements
- Develop advanced programs for data extraction and preparation
- Discover, design, and develop analytical methods to support data processing
- Perform data profiling manually or using profiling tools
- Identify critical data elements and PII handling processes/mandates
- Collaborate with the technology team to develop analytical models and validate results
- Interface and communicate with onsite teams directly to understand requirements
- Provide technical solutions as per business needs and best practices

Required Skills and Qualifications:
- BE/BTech/MTech/MCA with 3-7 years of industry experience in data analysis and management
- Experience in finance data domains
- Strong Python programming and data analysis skills
- Strong advanced SQL/PL SQL programming experience
- In-depth experience in data management, data integration, ETL, data modeling, data mapping, data profiling, data quality, reporting, and testing

Good to have:
- Experience using Agile methodologies
- Experience using cloud technologies such as AWS or Azure
- Experience in Kafka, Apache Spark using SparkSQL and Spark Streaming, or Apache Storm

Other key capabilities:
- Client-facing skills and proven ability in effective planning, execution, and problem-solving
- Excellent communication, interpersonal, and teamworking skills
- Multitasking attitude, flexible, with the ability to change priorities quickly
- Methodical approach, logical thinking, and the ability to plan work and meet deadlines
- Accuracy and attention to detail
- Written and verbal communication skills
- Willingness to travel to meet client needs
- Ability to plan resource requirements from high-level specifications
- Ability to quickly understand and learn new technologies/features and inspire change within the team and client organization

EY exists to build a better working world, helping to create long-term value for clients, people, and society, and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate across assurance, consulting, law, strategy, tax, and transactions. EY teams ask better questions to find new answers for the complex issues facing our world today.
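As an illustration of the data profiling and PII identification responsibilities listed above, here is a minimal hedged sketch in Pandas; the file path, columns, and name patterns are illustrative assumptions, not details from the posting:

```python
# A minimal sketch of a data profiling step in Pandas; the file path and
# PII name patterns are hypothetical.
import pandas as pd

df = pd.read_csv("data/loans.csv")  # hypothetical financial dataset

# Basic profile: types, completeness, and distinct counts per column.
profile = pd.DataFrame({
    "dtype": df.dtypes.astype(str),
    "non_null": df.notna().sum(),
    "null_pct": (df.isna().mean() * 100).round(2),
    "distinct": df.nunique(),
})
print(profile)

# Flag candidate critical data elements / PII by simple name patterns.
pii_like = [c for c in df.columns
            if any(k in c.lower() for k in ("name", "email", "phone", "ssn"))]
print("Possible PII columns:", pii_like)
```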
Posted 3 weeks ago
8.0 - 12.0 years
0 Lacs
indore, madhya pradesh
On-site
The AM3 Group is looking for a highly skilled Senior Java Developer with a strong background in AWS cloud services to join our dynamic team. In this role, you will create and manage modern, scalable, cloud-native applications using Java (up to Java 17), Spring Boot, Angular, and a comprehensive range of AWS tools.

Your responsibilities will include developing full-stack applications using Java, Spring Boot, Angular, and RESTful APIs; building and deploying cloud-native solutions with AWS services such as EC2, S3, Lambda, RDS, DynamoDB, and API Gateway; and designing and implementing microservices architectures for enhanced scalability and resilience. You will also create and maintain CI/CD pipelines using tools like Jenkins, GitHub Actions, AWS CodePipeline, and Terraform; containerize applications with Docker and manage them through Kubernetes (EKS); monitor and optimize performance using AWS CloudWatch, X-Ray, and the ELK Stack; work with Apache Kafka and Redis for real-time event-driven systems; and conduct unit/integration testing with JUnit, Mockito, and Jasmine, plus API testing via Postman. Collaboration within Agile/Scrum teams to deliver features in iterative sprints is an essential part of the role.

The ideal candidate has a minimum of 8 years of Java development experience with a strong understanding of Java 8/11/17; expertise in Spring Boot, Hibernate, and microservices; solid experience with AWS, including infrastructure and serverless (Lambda, EC2, S3, etc.); frontend development exposure with Angular (v2-12), JavaScript, and Bootstrap; hands-on experience with CI/CD, GitHub Actions, Jenkins, and Terraform; familiarity with SQL (MySQL, Oracle) and NoSQL (DynamoDB, MongoDB); and knowledge of SQS, JMS, and event-driven architecture. Familiarity with DevSecOps and cloud security best practices is essential. Preferred qualifications include experience with serverless frameworks (AWS Lambda); familiarity with React.js, Node.js, or Kotlin; and exposure to Big Data, Apache Spark, or machine learning pipelines.

Join our team at AM3 Group to work on challenging, high-impact cloud projects; benefit from competitive compensation and a flexible work environment; be part of a culture of innovation and continuous learning; and gain global exposure through cross-functional collaboration. Apply now to be part of a future-ready team shaping cloud-native enterprise solutions! For questions or referrals, contact us at careers@am3group.com. To learn more about us, visit https://am3group.com/.
Posted 3 weeks ago
4.0 - 8.0 years
0 Lacs
pune, maharashtra
On-site
YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, you will be part of a team of innovative professionals working with cutting-edge technologies. Our purpose is anchored in bringing real positive changes in an increasingly virtual world, transcending generational gaps and future disruptions.

We are currently seeking SQL professionals for the role of Data Engineer with 4-6 years of experience; a strong academic background is a must. As a Data Engineer at BNY Mellon in Pune, you will design, develop, and maintain scalable data pipelines and ETL processes using Apache Spark and SQL. You will collaborate with data scientists and analysts to understand data requirements, optimize and query large datasets, ensure data quality and integrity, implement data governance and security best practices, participate in code reviews, and troubleshoot data-related issues promptly.

Qualifications for this role include 4-6 years of experience in data engineering; proficiency in SQL and data processing frameworks like Apache Spark; knowledge of database technologies such as SQL Server or Oracle; experience with cloud platforms like AWS, Azure, or Google Cloud; familiarity with data warehousing solutions; an understanding of Python, Scala, or Java for data manipulation; excellent analytical and problem-solving skills; and good communication skills for working effectively in a team environment.

Joining YASH means being empowered to shape your career in an inclusive team environment. We offer career-oriented skilling models and promote continuous learning, unlearning, and relearning at a rapid pace. Our workplace is built on four principles: flexible work arrangements, free spirit, and emotional positivity; agile self-determination, trust, transparency, and open collaboration; all support needed for the realization of business goals; and stable employment with a great atmosphere and an ethical corporate culture.
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
As an AWS Data Engineer, you should have at least 3 years of experience in AWS data engineering. Your main responsibilities will include designing and building ETL pipelines and data lakes to automate the ingestion of both structured and unstructured data. You will need to be proficient with AWS big data technologies such as Redshift, S3, AWS Glue, Kinesis, Athena, DMS, EMR, and Lambda for serverless ETL processes. Knowledge of SQL and NoSQL programming is essential, along with experience in batch and real-time pipelines. The role requires excellent programming and debugging skills in Scala or Python, expertise in Spark, a good understanding of data lake formation, and hands-on experience deploying models. Experience with production migration processes is a must, and familiarity with Power BI visualization tools and connectivity would be advantageous.

In this position, you will design, build, and operationalize large-scale enterprise data solutions and applications; analyze, re-architect, and re-platform on-premise data warehouses to data platforms within the AWS cloud; and create production data pipelines from ingestion to consumption using Python or Scala within the AWS big data architecture. You will also conduct detailed assessments of current-state data platforms and develop suitable transition paths to the AWS cloud. If you possess strong data engineering skills and are looking for a challenging role in AWS data engineering, this opportunity may be the right fit for you.
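As a sketch of the ingestion-to-consumption pattern on the AWS stack named above, here is a minimal batch step that reads raw JSON from S3, transforms it, and writes partitioned Parquet back to a curated zone; the bucket names and schema are hypothetical assumptions:

```python
# A minimal sketch of an S3 batch ingestion step with PySpark (e.g., on EMR);
# bucket names and schema are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3-ingest").getOrCreate()

raw = spark.read.json("s3://raw-zone/clickstream/2024/")  # hypothetical bucket

curated = (raw
           .filter(F.col("user_id").isNotNull())
           .withColumn("event_date", F.to_date("event_ts"))
           .select("user_id", "event_type", "event_date"))

# Parquet in the curated zone is queryable by Athena and loadable by Redshift.
(curated.write
        .mode("append")
        .partitionBy("event_date")
        .parquet("s3://curated-zone/clickstream/"))
```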
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
bhubaneswar
On-site
As a PySpark Developer_VIS, your primary responsibility will be to develop high-performance PySpark applications for large-scale data processing. You will collaborate with data engineers and analysts to integrate data pipelines and design ETL processes using PySpark, and optimize existing data models and workflows to enhance overall performance. You will also analyze large datasets to derive actionable insights and ensure data quality and integrity throughout the data processing lifecycle. Utilizing SQL for querying databases and validating data is essential, along with working with cloud technologies to deploy and maintain data solutions. You will participate in code reviews, maintain version control, and document all processes, workflows, and system changes clearly. Supporting the resolution of production issues, assisting stakeholders, and mentoring junior developers on data processing best practices are also part of your responsibilities, as is staying updated on emerging technologies and industry trends, implementing data security measures, and contributing insights for project improvements in team meetings.

Qualifications required for this position include a Bachelor's degree in Computer Science, Engineering, or a related field, along with 3+ years of experience in PySpark development and data engineering. Strong proficiency in SQL and relational databases; experience with ETL tools and data processing frameworks; familiarity with Python for data manipulation and analysis; and knowledge of big data technologies such as Apache Hadoop and Spark are necessary. Experience with cloud platforms like AWS or Azure, an understanding of data warehousing concepts and strategies, excellent problem-solving and analytical skills, attention to detail and a commitment to quality, the ability to work independently and as part of a team, excellent communication and interpersonal skills, experience with version control systems like Git, the ability to manage multiple priorities in a fast-paced environment, a willingness to learn and adapt to new technologies, strong organizational skills, and reliability in meeting deadlines are also essential for this role.
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
chennai, tamil nadu
On-site
We are seeking a skilled and seasoned Senior Data Engineer to join our innovative team. The ideal candidate has a solid foundation in data engineering and proficiency in Azure, particularly Azure Data Factory (ADF), Azure Fabric, Databricks, and Snowflake. In this role, you will design, build, and maintain data pipelines, ensure data quality and accessibility, and collaborate with various teams to support our data-centric initiatives.

Your responsibilities will include crafting, enhancing, and sustaining robust data pipelines using Azure Data Factory, Azure Fabric, Databricks, and Snowflake; working closely with data scientists, analysts, and stakeholders to understand data requirements, guarantee data availability, and maintain data quality; and implementing and refining ETL processes to efficiently ingest, transform, and load data from diverse sources into data warehouses, data lakes, and Snowflake. You will ensure data integrity and security by adhering to best practices and data governance policies; monitor and troubleshoot data pipelines for timely and accurate data delivery; optimize data storage and retrieval processes for performance and scalability; stay abreast of industry trends and best practices in data engineering and cloud technologies; and mentor and guide junior data engineers.

To qualify, you should hold a Bachelor's or Master's degree in Computer Science, Information Technology, or a related field, and have over 5 years of experience in data engineering with a strong emphasis on Azure, ADF, Azure Fabric, Databricks, and Snowflake. Proficiency in SQL, experience in data modeling and database design, and solid programming skills in Python, Scala, or Java are prerequisites. Familiarity with big data technologies like Apache Spark, Hadoop, and Kafka, plus a sound grasp of data warehousing concepts and solutions, including Azure Synapse Analytics and Snowflake, is highly desirable, as are knowledge of data governance, data quality, and data security best practices; exceptional problem-solving skills; and effective communication and collaboration within a team setting. Preferred qualifications include experience with other Azure services such as Azure Blob Storage, Azure SQL Database, and Azure Cosmos DB; familiarity with DevOps practices and tools for CI/CD in data engineering; and certifications in Azure Data Engineering, Snowflake, or related areas.
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
delhi
On-site
You will be responsible for exploring and visualizing data to gain insights and for identifying differences in data distribution that may impact model performance in real-world deployment. You will verify data quality through data cleaning and work to enhance decision-making for stakeholders using Machine Learning. Your role will involve designing and developing Machine Learning systems, conducting statistical analysis, and fine-tuning models based on test results. Additionally, you will train and retrain ML systems as needed, deploy models in production, and manage cloud infrastructure costs. Developing Machine Learning applications tailored to client and data scientist needs is part of your responsibilities, as is evaluating the problem-solving capabilities of ML algorithms and ranking them by how well they meet objectives.

In terms of technical knowledge, you should have experience addressing real-time problems using ML and deep learning models deployed in production, with a portfolio of impactful projects. Proficiency in Python and familiarity with the Jupyter framework, Google Colab, and cloud-hosted notebooks like AWS SageMaker and Databricks are essential. You should be well-versed in libraries such as scikit-learn, TensorFlow, OpenCV (cv2), PySpark, Pandas, and NumPy, and have expertise in visualizing and manipulating complex datasets using libraries like Seaborn, Plotly, and Matplotlib. Strong knowledge of the linear algebra, statistics, and probability underpinning Machine Learning is required, along with proficiency in ML algorithms (e.g., gradient boosting, stacked models, classification algorithms, and deep learning) and experience in hyperparameter tuning and model performance comparison. Familiarity with big data technologies like the Hadoop stack and Spark and basic cloud usage (e.g., VMs like EC2) is advantageous, with bonus points for Kubernetes and task queues. Effective written and verbal communication skills and experience working in an Agile environment round out the profile.

Key Skills: Python, Big Data, Apache Spark, Machine Learning (ML), Deep Learning.
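As an illustration of the algorithm ranking and hyperparameter tuning workflow described above, here is a minimal sketch using scikit-learn on a built-in toy dataset; the candidate models and grids are illustrative choices, not requirements from the posting:

```python
# A minimal sketch of comparing and tuning ML algorithms with scikit-learn;
# the models and parameter grids are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Rank candidate algorithms by cross-validated accuracy, then check held-out
# performance for the best configuration of each.
for name, model, grid in [
    ("logreg", LogisticRegression(max_iter=5000), {"C": [0.1, 1.0, 10.0]}),
    ("gboost", GradientBoostingClassifier(), {"n_estimators": [100, 200]}),
]:
    search = GridSearchCV(model, grid, cv=5, scoring="accuracy")
    search.fit(X_train, y_train)
    print(name, search.best_params_, round(search.score(X_test, y_test), 3))
```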
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
We are seeking a skilled and experienced Spark Scala Developer with strong expertise in AWS cloud services and SQL to join our data engineering team. Your primary responsibility will be to design, build, and optimize scalable data processing systems that support our data platform.

Your key responsibilities will include developing and maintaining large-scale distributed data processing pipelines using Apache Spark with Scala; working with AWS services (S3, EMR, Lambda, Glue, Redshift, etc.) to build and manage data solutions in the cloud; writing complex SQL queries for data extraction, transformation, and analysis; and optimizing Spark jobs for performance and cost-efficiency. You will collaborate with data scientists, analysts, and other developers to understand data requirements; build and maintain data lake and data warehouse solutions; implement best practices in coding, testing, and deployment; and ensure data quality and consistency across systems.

To be successful in this role, you should have strong hands-on experience with Apache Spark (preferably in Scala), proficiency in the Scala programming language, and solid experience with SQL (including complex joins, window functions, and performance tuning). Working knowledge of AWS services such as S3, EMR, Glue, Lambda, Athena, and Redshift; experience building and maintaining ETL/ELT pipelines; and familiarity with data modeling and data warehousing concepts are required. Experience with version control (e.g., Git) and CI/CD pipelines is a plus, along with strong problem-solving and communication skills.
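As a sketch of the window-function SQL this role calls for, here is a minimal example expressed in Spark SQL (driven from Python here for brevity; the same query runs unchanged from a Scala driver). The table and column names are hypothetical assumptions:

```python
# A minimal sketch of a Spark SQL window-function query; table and column
# names are hypothetical. The SQL is identical when submitted from Scala.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

latest_orders = spark.sql("""
    SELECT *
    FROM (
        SELECT order_id,
               customer_id,
               amount,
               ROW_NUMBER() OVER (
                   PARTITION BY customer_id
                   ORDER BY order_ts DESC
               ) AS rn                       -- rank orders per customer
        FROM sales.orders
    ) AS ranked
    WHERE rn = 1                             -- keep each customer's latest order
""")
latest_orders.show()
```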
Posted 3 weeks ago
6.0 - 10.0 years
0 Lacs
karnataka
On-site
As a Lead Azure Data Engineer at CGI, you will be part of a dynamic team of builders dedicated to helping clients succeed. With our global resources, expertise, and stability, we aim to achieve results for our clients and members. If you are looking for a challenging role that offers professional growth and development, this is the perfect opportunity for you.

In this role, you will support the development and maintenance of our trading and risk data platform. Your main focus will be on designing and building data foundations and end-to-end solutions to maximize the value of data. You will collaborate with other data professionals to integrate and enrich trade data from various ETRM systems and create scalable solutions to enhance the usage of TRM data across different platforms and teams.

Key Responsibilities:
- Implement and manage a lakehouse using Databricks and the Azure tech stack (ADLS Gen2, ADF, Azure SQL).
- Utilize SQL, Python, Apache Spark, and Delta Lake for data engineering tasks.
- Implement data integration techniques, ETL processes, and data pipeline architectures.
- Develop CI/CD pipelines for code management using Git.
- Create and maintain technical documentation for the platform.
- Ensure the platform is developed with software engineering, data analytics, and data security best practices.
- Optimize data processing and storage systems for high performance, reliability, and security.
- Work in Agile methodology and use ADO boards for sprint deliveries.
- Demonstrate excellent communication skills to convey technical and business concepts effectively.
- Collaborate with team members at all levels to share ideas and knowledge.

Required Qualifications:
- Bachelor's degree in computer science or a related field.
- 6 to 10 years of experience in software development/engineering.
- Proficiency in Azure technologies, including Databricks, ADLS Gen2, ADF, and Azure SQL.
- Strong hands-on experience with SQL, Python, Apache Spark, and Delta Lake.
- Knowledge of data integration techniques, ETL processes, and data pipeline architectures.
- Experience building CI/CD pipelines and using Git for code management.
- Familiarity with Agile methodology and ADO boards for sprint deliveries.

At CGI, we believe in ownership, teamwork, respect, and belonging. As a CGI Partner, you will have the opportunity to turn meaningful insights into action, develop innovative solutions, and collaborate with a diverse team to shape your career and contribute to our collective success. Join us on this exciting journey of growth and innovation at one of the largest IT and business consulting services firms in the world.
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
BizViz is a company that offers a comprehensive view of a business's data, catering to various industries and meeting the diverse needs of business executives. With a dedicated team of over 50 professionals working on the BizViz platform for several years, the company aims to develop technological solutions that give our clients a competitive advantage. At BizViz, we are committed to the success of our customers, striving to create applications that align with their unique visions and requirements; we steer clear of generic ERP templates, offering businesses a more tailored solution.

As a Big Data Engineer at BizViz, you will join a small, agile team of data engineers focused on building an innovative big data platform for enterprises dealing with critical data management and diverse application stakeholders at scale. The platform handles data ingestion, warehousing, and governance, allowing developers to create complex queries efficiently. With features like automatic scaling, elasticity, security, logging, and data provenance, it empowers developers to concentrate on algorithms rather than administrative tasks. We are seeking engineers who are eager for technical challenges, ready to enhance the current platform for existing clients and develop new capabilities for future customers.

Key Responsibilities:
- Work as a Senior Big Data Engineer within the Data Science Innovation team, collaborating closely with internal and external stakeholders throughout the development process.
- Understand the needs of key stakeholders to enhance or create new solutions related to data and analytics.
- Collaborate in a cross-functional, matrix organization, even in ambiguous situations.
- Contribute to scalable solutions using large datasets alongside other data scientists.
- Research innovative data solutions to address real market challenges.
- Analyze data to provide fact-based recommendations for innovation projects.
- Explore Big Data and other unstructured data sources to uncover new insights.
- Partner with cross-functional teams to develop and execute business strategies.
- Stay updated on advancements in data analytics, Big Data, predictive analytics, and technology.

Qualifications:
- BTech/MCA degree or higher.
- Minimum 5 years of experience.
- Proficiency in Java, Scala, and Python.
- Familiarity with Apache Spark, Hadoop, Hive, Spark SQL, Spark Streaming, and Apache Kafka.
- Knowledge of predictive algorithms, MLlib, Cassandra, RDBMS (MySQL, MS SQL, etc.), NoSQL, columnar databases, and Bigtable.
- Deep understanding of search engine technology, including Elasticsearch/Solr.
- Experience with Agile development practices such as Scrum.
- Strong problem-solving skills for designing algorithms related to data cleaning, mining, clustering, and pattern recognition.
- Ability to work effectively in a matrix-driven organization under varying circumstances.
- Desirable personal qualities: creativity, tenacity, curiosity, and a passion for technical excellence.

Location: Bangalore

To apply for this position, interested candidates can send their applications to careers@bdb.ai.
Posted 3 weeks ago
2.0 - 6.0 years
0 Lacs
jaipur, rajasthan
On-site
As a Databricks Engineer specializing in the Azure Data Platform, you will design, develop, and optimize scalable data pipelines within the Azure ecosystem. You should have hands-on experience with Python-based ETL development, Lakehouse architecture, and building Databricks workflows using the bronze-silver-gold data modeling approach.

Key Responsibilities:
- Develop and maintain ETL pipelines using Python and Apache Spark in Azure Databricks.
- Implement and manage bronze-silver-gold data lake layers using Delta Lake; a minimal layered sketch follows this listing.
- Work with Azure services such as Azure Data Lake Storage (ADLS), Azure Data Factory (ADF), and Azure Synapse for end-to-end pipeline orchestration.
- Ensure data quality, integrity, and lineage across all layers of the data pipeline.
- Optimize Spark performance, manage cluster configurations, and schedule jobs effectively in Databricks.
- Collaborate with data analysts, architects, and business stakeholders to deliver data-driven solutions.

Requirements:
- 3+ years of experience with Python in a data engineering environment.
- 2+ years of hands-on experience with Azure Databricks and Apache Spark.
- A strong background in building scalable data lake pipelines following the bronze-silver-gold architecture.
- In-depth knowledge of Delta Lake, Parquet, and data versioning.
- Familiarity with Azure Data Factory, ADLS Gen2, and SQL.
- Experience with CI/CD pipelines and job orchestration tools such as Azure DevOps or Airflow (advantageous).
- Excellent verbal and written communication skills.

Nice to Have:
- Experience with data governance, security, and monitoring in Azure.
- Exposure to real-time streaming or event-driven pipelines (Kafka, Event Hubs).
- Understanding of MLflow, Unity Catalog, or other data cataloging tools.

By joining our team, you will be part of high-impact, cloud-native data initiatives, work in a collaborative and growth-oriented team focused on innovation, and contribute to modern data architecture standards using the latest Azure technologies. If you are ready to advance your career as a Databricks Engineer on the Azure Data Platform, send your updated resume to hr@vidhema.com. We look forward to hearing from you and potentially welcoming you to our team.
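To make the bronze-silver-gold (medallion) approach concrete, here is a minimal PySpark sketch of the three layers with Delta Lake. It is an illustration under stated assumptions, not a prescribed implementation: the paths, columns, and aggregation are hypothetical, and the Delta format assumes a Databricks runtime.

```python
# Bronze -> silver -> gold layering with Delta Lake.
# All paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: land raw files as-is, adding only ingestion metadata.
bronze = (
    spark.read.format("json")
    .load("abfss://landing@datalake.dfs.core.windows.net/orders/")
    .withColumn("_ingested_at", F.current_timestamp())
)
bronze.write.format("delta").mode("append").save("/mnt/lake/bronze/orders")

# Silver: clean and conform — dedupe, enforce types, drop bad rows.
silver = (
    spark.read.format("delta").load("/mnt/lake/bronze/orders")
    .dropDuplicates(["order_id"])
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("order_id").isNotNull())
)
silver.write.format("delta").mode("overwrite").save("/mnt/lake/silver/orders")

# Gold: business-level aggregate ready for consumption.
gold = (
    spark.read.format("delta").load("/mnt/lake/silver/orders")
    .groupBy("customer_id")
    .agg(F.sum("amount").alias("total_amount"))
)
gold.write.format("delta").mode("overwrite").save("/mnt/lake/gold/customer_totals")
```

The design intent of the layering is that raw data stays immutable in bronze, all cleansing logic lives in the silver step, and gold tables stay small and query-shaped, so each layer can be rebuilt from the one below it.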
Posted 3 weeks ago
12.0 - 17.0 years
25 - 40 Lacs
Hyderabad, Pune, Chennai
Work from Office
Job Description:
15 to 18 years of experience, with at least 3 to 4 years of expertise in ETL, data engineering, and cloud technologies, and a proven ability to orchestrate cutting-edge technology to connect applications within the cloud environment on a large development project.

Primary Technical Skills:
ETL, Apache Spark, AWS EMR, EKS, Serverless, Data Engineering, Distributed Computing, Data Lineage, Apache Airflow, Java 17+, Spring Boot/Quarkus, Hibernate ORM, REST, Postgres or any RDBMS, Microservices, Cloud-native development.

Secondary Technical Skills:
DevOps: Docker, Kubernetes, CI/CD stack (Jenkins or GitLab CI, Maven, Git, SonarQube, Nexus), AWS, expertise in at least one data engineering tool (e.g., Informatica, DataStage), Apache Airflow, Redis, NoSQL (any document DB), Kafka/RabbitMQ, OAuth2, Argo, Swagger, OAS.

Experience / Application of Skills:
- Experience in ETL implementation using cloud technologies, distributed computing, and big data processing; a minimal orchestration sketch follows this listing.
- Orchestrate the integration of cloud-native principles, Kubernetes, MicroProfile specs, and the Spark framework.
- Hands-on Java lead: strong in OOP concepts, Java design patterns, reactive programming, writing high-level solutions, and clean architecture.
- Strong advocate of coding best practices (SOLID, DRY, clean code, exception handling, TDD, unit testing, integration testing).
- Have implemented common frameworks for an application/platform (e.g., exception library, security authentication/authorization, auditing, idempotency, connectors).
- Experience implementing HLD, microservices architecture, and design patterns such as resiliency, service orchestration, DB-per-service, and CQRS.

Preferred Personal Qualities:
- Proactive self-starter, willing to learn new technology.
- Develop rapid prototypes/PoC/MVP for data integration within cloud environments.
- Work with team members, mentoring and guiding them in their career track.
- Excellent problem-solving skills and ability to work in a fast-paced environment.
- Stay updated on the latest advancements in cloud and data technologies, as well as best practices.
- Strong leadership and communication skills.

Role: Engineering Lead / Data Engineering Architect
Shift: General shift
Location: Chennai, Hyderabad, and Pune.
Interested candidates can drop their resumes at Krishna.Kumaravel@ltimindtree.com
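Because the role combines Apache Airflow with Spark-based ETL, a hedged sketch of orchestrating a Spark job from an Airflow DAG is shown below. It assumes Airflow 2.4+ with the apache-airflow-providers-apache-spark package installed and a configured Spark connection; the DAG id, schedule, application path, and Spark configuration are hypothetical.

```python
# A minimal Airflow DAG that submits a PySpark ETL job on a schedule.
# DAG id, paths, and configuration values are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="daily_trade_etl",           # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_etl = SparkSubmitOperator(
        task_id="run_spark_etl",
        application="/opt/jobs/etl_job.py",  # hypothetical PySpark script
        conn_id="spark_default",             # assumes a configured Spark connection
        conf={"spark.sql.shuffle.partitions": "200"},
    )
```

On EMR specifically, teams often swap the generic spark-submit step for the Amazon provider's EMR operators, but the scheduling and dependency model shown here stays the same.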
Posted 3 weeks ago