3.0 - 5.0 years
0 - 1 Lacs
Pune, Maharashtra, India
On-site
Key Responsibilities:
- Assist a team of engineers to provide production support and troubleshooting for 24/7 manufacturing operations.
- Solve customer problems by analyzing situations to find common issues, identifying root causes, and developing effective solutions.
- Document applications and solutions.
- Respond to customer requests.
- Participate in project teams and represent interests in area(s) of responsibility (e.g., screening, measurements, warehouse, commercial systems, etc.).
- Lead local IT projects.
- Arrange/conduct quality assurance testing on technology.
- Ensure that confidentiality and protection of information are built into applications and processes.
- Respond to customers' and IT leaders' questions about a process or technology.
- Create IT processes.
- Keep up with the latest technologies and suggest how to apply them in the business.
- Serve on the weekend production support team approximately every 5-6 weeks.
Development Value:
- Responsible for IMS systems in the plant.
- Key interface to the site leader and division/corporate project leaders.
- Corning IT is recognized as a Top 100 IT Environment (per various trade surveys).
- Corning IT has a strong presence in the Pan-Asia region with locations in China, India, Japan, and Taiwan.
- Potential movement to regional/corporate IT roles/SME.
Required Experience/Education:
- 3-5 years of related IT experience.
- Excellent programming skills (3+ years of experience): C#, VB, ASP.NET, Microsoft SQL Server, RDBMS, Visual Studio.
- Experience with non-IMS Corning toolsets (e.g., iFIX SCADA, real-time controls, Microsoft Office, Microsoft OS, MSSQL, Oracle).
- Experience with Power BI/Tableau, advanced reporting systems, ETL, etc.
- Understanding of data lifecycle management and ELT/ETL processes to collect, access, use, store, transfer, and delete data.
- Knowledge of and/or experience working on MES (Camstar or any Manufacturing Execution System).
- Exposure to AI/ML projects and advanced Python library usage.
- Experience with Databricks/Kairos data lakes.
- Disciplined approach to development with full life-cycle project experience.
- Disciplined approach to requirements analysis with a software engineering or computer science background.
- Experience working within large/strategic and small/tactical projects.
- Experience in a manufacturing plant environment.
- Good written and verbal English skills.
Required Knowledge: Bachelor's degree in Computer Engineering or E&TC (full-time from a reputed engineering college, no drops in education).
Desired Experience/Education:
- Multinational corporation experience.
- Exposure to projects.
- Performance excellence experience.
- Experience with big data, data engineering, and maintaining data pipelines.
- Technical familiarity with cloud data platforms like Databricks, S3, Parquet, and Delta Lake.
- Working knowledge of ML and MLOps.
- Working knowledge of scripting languages (PowerShell, Java, etc.).
- Experience with GenAI and prompt engineering.
- Self-motivated with a passion for digital technologies.
Posted 1 day ago
10.0 - 14.0 years
0 Lacs
Dehradun, Uttarakhand
On-site
As a Data Modeler, your primary responsibility will be to design and develop conceptual, logical, and physical data models supporting enterprise data initiatives. You will work with modern storage formats like Parquet and ORC, and build and optimize data models within Databricks Unity Catalog. Collaborating with data engineers, architects, analysts, and stakeholders, you will ensure alignment with ingestion pipelines and business goals. Translating business and reporting requirements into robust data architecture, you will follow best practices in data warehousing and Lakehouse design. Your role will involve maintaining metadata artifacts, enforcing data governance, quality, and security protocols, and continuously improving modeling processes. You should have over 10 years of hands-on experience in data modeling within Big Data environments. Your expertise should include OLTP, OLAP, dimensional modeling, and enterprise data warehouse practices. Proficiency in modeling methodologies like Kimball, Inmon, and Data Vault is essential. Hands-on experience with modeling tools such as ER/Studio, ERwin, PowerDesigner, SQLDBM, dbt, or Lucidchart is preferred. Experience in Databricks with Unity Catalog and Delta Lake is required, along with a strong command of SQL and Apache Spark for querying and transformation. Familiarity with the Azure Data Platform, including Azure Data Factory, Azure Data Lake Storage, Azure Synapse Analytics, and Azure SQL Database, is beneficial. Exposure to Azure Purview or similar data cataloging tools is a plus. Strong communication and documentation skills are necessary for this role, as well as the ability to work in cross-functional agile environments. A Bachelor's or Master's degree in Computer Science, Information Systems, Data Engineering, or a related field is required. Certifications such as Microsoft DP-203: Data Engineering on Microsoft Azure are a plus. Experience working in agile/scrum environments and exposure to enterprise data security and regulatory compliance frameworks like GDPR and HIPAA are advantageous.,
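For illustration, a dimensional model built on Databricks Unity Catalog typically materializes as Delta tables addressed by a three-level catalog.schema.table name. The sketch below shows that pattern in PySpark, assuming a Databricks notebook (where `spark` is predefined); the catalog, schema, volume path, and column names are hypothetical.

```python
# A minimal sketch: building one dimension table of a star schema as a Delta table
# governed by Unity Catalog. Catalog, schema, path and column names are assumptions.
from pyspark.sql import functions as F

orders = spark.read.parquet("/Volumes/main/sales/raw/orders/")  # hypothetical UC volume path

dim_customer = (
    orders.select("customer_id", "customer_name", "country")
          .dropDuplicates(["customer_id"])
          .withColumn("load_ts", F.current_timestamp())
)

# three-level Unity Catalog name: catalog.schema.table
(dim_customer.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("main.sales.dim_customer"))

# one common way to keep the dimension query-friendly after loads
spark.sql("OPTIMIZE main.sales.dim_customer ZORDER BY (customer_id)")
```

The `OPTIMIZE ... ZORDER BY` step is just one option; partitioning or clustering choices would depend on the actual model and query patterns.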
Posted 1 day ago
7.0 - 9.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
We're Hiring: Senior Data Engineer (7+ Years Experience)
Location: Gurugram, Haryana, India
Duration: 6 Months C2H (Contract to Hire)
Apply Now: [HIDDEN TEXT]
What We're Looking For:
- 7+ years of experience in data engineering
- Strong expertise in building scalable, robust batch and real-time data pipelines
- Proficiency in AWS Data Services (S3, Glue, Athena, EMR, Kinesis, etc.)
- Advanced SQL skills and deep knowledge of file formats: Parquet, Delta Lake, Iceberg, Hudi
- Hands-on experience with CDC patterns
- Experience with stream processing (Apache Flink, Kafka Streams) and distributed frameworks like PySpark
- Expertise in Apache Airflow for workflow orchestration
- Solid foundation in data warehousing concepts and experience with both relational and NoSQL databases
- Strong communication and problem-solving skills
- Passion for staying up to date with the latest in the data tech landscape
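As one concrete illustration of the real-time side of such a pipeline, the sketch below uses Spark Structured Streaming (PySpark) to read a Kafka topic and land it as Delta files on S3. The broker, topic, schema, and S3 paths are hypothetical, and the Kafka and Delta Lake connector packages are assumed to be on the cluster; a Flink or Kafka Streams implementation would follow the same read-parse-write shape.

```python
# Minimal sketch of a real-time leg: Kafka -> parse JSON -> append to a Delta table on S3.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
       .option("subscribe", "orders")                       # hypothetical topic
       .option("startingOffsets", "latest")
       .load())

parsed = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(F.from_json("json", schema).alias("o"))
          .select("o.*"))

query = (parsed.writeStream
         .format("delta")
         .option("checkpointLocation", "s3://my-bucket/checkpoints/orders/")  # hypothetical
         .outputMode("append")
         .start("s3://my-bucket/bronze/orders/"))                             # hypothetical

query.awaitTermination()
```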
Posted 2 days ago
7.0 - 13.0 years
10 - 17 Lacs
Gurgaon, Haryana, India
On-site
About us - Coders Brain is a global leader in its services, digital and business solutions that partners with its clients to simplify, strengthen and transform their businesses. We ensure the highest levels of certainty and satisfaction through a deep-set commitment to our clients, comprehensive industry expertise and a global network of innovation and delivery centers. We achieved our success because of how successfully we integrate with our clients.
- Quick Implementation - We offer quick implementation for the new onboarding client.
- Experienced Team - We've built an elite and diverse team that brings its unique blend of talent, expertise, and experience to make you more successful, ensuring our services are uniquely customized to your specific needs.
- One Stop Solution - Coders Brain provides end-to-end solutions for businesses at an affordable price with uninterrupted and effortless services.
- Ease of Use - All of our products are user-friendly and scalable across multiple platforms. Our dedicated team at Coders Brain implements keeping the interest of enterprise and users in mind.
- Secure - We understand and treat your security with utmost importance. Hence we blend security and scalability in our implementation, considering the long-term impact on business benefit.
Experience: 7+ years
Role: Data Engineer
Location: Gurgaon
Employment: Permanent with Codersbrain Technology Pvt Ltd
Client: EMB Global
Job Description - Must Have:
- Excellent communication skills with the ability to interact directly with customers.
- Azure/AWS Databricks.
- Python / Scala / Spark / PySpark.
- Strong SQL and RDBMS expertise.
- HIVE / HBase / Impala / Parquet.
- Sqoop, Kafka, Flume.
- Airflow.
- Jenkins or Bamboo.
- GitHub or Bitbucket.
- Nexus.
Good to Have:
- Relevant accredited certifications for Azure, AWS, Cloud Engineering, and/or Databricks.
- Knowledge of Delta Live Tables (DLT).
If you're interested, please share the below-mentioned details:
- Current CTC:
- Expected CTC:
- Current Company:
- Notice Period:
- Current Location:
- Preferred Location:
- Total experience:
- Relevant experience:
- Highest qualification:
- DOJ (if offer in hand from other company):
Posted 2 days ago
10.0 - 14.0 years
0 Lacs
Dehradun, Uttarakhand
On-site
You should have familiarity with modern storage formats like Parquet and ORC. Your responsibilities will include designing and developing conceptual, logical, and physical data models to support enterprise data initiatives. You will build, maintain, and optimize data models within Databricks Unity Catalog, developing efficient data structures using Delta Lake to optimize performance, scalability, and reusability. Collaboration with data engineers, architects, analysts, and stakeholders is essential to ensure data model alignment with ingestion pipelines and business goals. You will translate business and reporting requirements into a robust data architecture using best practices in data warehousing and Lakehouse design. Additionally, maintaining comprehensive metadata artifacts such as data dictionaries, data lineage, and modeling documentation is crucial. Enforcing and supporting data governance, data quality, and security protocols across data ecosystems will be part of your role. You will continuously evaluate and improve modeling processes. The ideal candidate will have 10+ years of hands-on experience in data modeling in Big Data environments. Expertise in OLTP, OLAP, dimensional modeling, and enterprise data warehouse practices is required. Proficiency in modeling methodologies including Kimball, Inmon, and Data Vault is expected. Hands-on experience with modeling tools like ER/Studio, ERwin, PowerDesigner, SQLDBM, dbt, or Lucidchart is preferred. Proven experience in Databricks with Unity Catalog and Delta Lake is necessary, along with a strong command of SQL and Apache Spark for querying and transformation. Experience with the Azure Data Platform, including Azure Data Factory, Azure Data Lake Storage, Azure Synapse Analytics, and Azure SQL Database is beneficial. Exposure to Azure Purview or similar data cataloging tools is a plus. Strong communication and documentation skills are required, with the ability to work in cross-functional agile environments. Qualifications for this role include a Bachelor's or Master's degree in Computer Science, Information Systems, Data Engineering, or a related field. Certifications such as Microsoft DP-203: Data Engineering on Microsoft Azure are desirable. Experience working in agile/scrum environments and exposure to enterprise data security and regulatory compliance frameworks (e.g., GDPR, HIPAA) are also advantageous.,
Posted 3 days ago
3.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Orion Innovation is a premier, award-winning, global business and technology services firm. Orion delivers game-changing business transformation and product development rooted in digital strategy, experience design, and engineering, with a unique combination of agility, scale, and maturity. We work with a wide range of clients across many industries including financial services, professional services, telecommunications and media, consumer products, automotive, industrial automation, professional sports and entertainment, life sciences, ecommerce, and education. As an ETL Technical Lead at Orion Innovation located in Chennai, you are required to have at least 5 years of ETL experience and 3 years of experience specifically in Azure Synapse. Your role will involve designing, developing, and managing ETL processes within the Azure ecosystem. You must possess proficiency with Azure Synapse Pipelines, Azure Dedicated SQL Pool, Azure Data Lake Storage (ADLS), and other related Azure services. Additionally, experience with audit logging, data governance, and implementing data integrity and data lineage best practices is essential. Your responsibilities will include leading and managing the ETL team, providing mentorship, technical guidance, and driving the delivery of key data initiatives. You will design, develop, and maintain ETL pipelines using Azure Synapse Pipelines for ingesting data from various file formats and securely storing them in Azure Data Lake Storage (ADLS). Furthermore, you will architect, implement, and manage data solutions following the Medallion architecture for effective data processing and transformation. It is crucial to leverage Azure Data Lake Storage (ADLS) to build scalable and high-performance data storage solutions, ensuring optimal data lake management. You will also be responsible for managing the Azure Dedicated SQL Pool to optimize query performance and scalability. Automation of data workflows and processes using Logic Apps, as well as ensuring secure and compliant data handling through audit logging and access controls, will be part of your duties. Collaborating with data scientists to integrate ETL pipelines with Machine Learning models for predictive analytics and advanced data science use cases is key. Troubleshooting and resolving complex data pipeline issues, monitoring and optimizing performance, and acting as the primary technical point of contact for the ETL team are also essential aspects of this role. Orion Systems Integrators, LLC and its affiliates are committed to protecting your privacy. For more information on the Candidate Privacy Policy, please refer to the official documentation on the Orion website.,
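As a rough illustration of the Medallion flow described above, the sketch below promotes raw ("bronze") files landed in ADLS by a Synapse pipeline into a cleansed "silver" layer using PySpark, for example in a Synapse Spark pool. The storage account, container, entity, and column names are hypothetical, and the pool is assumed to already have access to the storage account.

```python
# Rough sketch of a bronze -> silver hop in a Medallion layout on ADLS Gen2.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

lake = "abfss://datalake@mystorageacct.dfs.core.windows.net"  # hypothetical container/account

# bronze: files landed as-is by the Synapse pipeline
bronze = spark.read.parquet(f"{lake}/bronze/claims/")

# silver: de-duplicated, typed, audit-stamped, with basic integrity rules applied
silver = (bronze
          .dropDuplicates(["claim_id"])
          .withColumn("claim_amount", F.col("claim_amount").cast("decimal(18,2)"))
          .withColumn("ingested_at", F.current_timestamp())
          .filter(F.col("claim_id").isNotNull()))

silver.write.mode("overwrite").parquet(f"{lake}/silver/claims/")
```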
Posted 4 days ago
5.0 - 8.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: Palantir Foundry
Good to have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full-time education
Project Role: Lead Data Engineer
Project Role Description: Design, build and enhance applications to meet business processes and requirements in Palantir Foundry.
Work experience: Minimum 6 years
Must have Skills: Palantir Foundry, PySpark
Good to Have Skills: Experience in PySpark, Python and SQL; knowledge of Big Data tools and technologies; organizational and project management experience.
Job Requirements & Key Responsibilities:
- Responsible for designing, developing, testing, and supporting data pipelines and applications on Palantir Foundry.
- Configure and customize Workshop to design and implement workflows and ontologies.
- Collaborate with data engineers and stakeholders to ensure successful deployment and operation of Palantir Foundry applications.
- Work with stakeholders including the product owner, data, and design teams to assist with data-related technical issues, understand the requirements, and design the data pipeline.
- Work independently, troubleshoot issues and optimize performance.
- Communicate design processes, ideas, and solutions clearly and effectively to team and client.
- Assist junior team members in improving efficiency and productivity.
Technical Experience:
- Proficiency in PySpark, Python and SQL with demonstrable ability to write and optimize SQL and Spark jobs.
- Hands-on experience with Palantir Foundry services such as Data Connection, Code Repository, Contour, data lineage and health checks.
- Good to have working experience with Workshop, Ontology, Slate.
- Hands-on experience in data engineering and building data pipelines (code/no code) for ELT/ETL data migration, data refinement and data quality checks on Palantir Foundry.
- Experience in ingesting data from different external source systems using data connections and syncs.
- Good knowledge of Spark architecture and hands-on experience in performance tuning and code optimization.
- Proficient in managing both structured and unstructured data, with expertise in handling various file formats such as CSV, JSON, Parquet, and ORC.
- Experience in developing and managing scalable architecture and managing large data sets.
- Good understanding of data loading mechanisms and implementing strategies for capturing CDC.
- Nice to have: test-driven development and CI/CD workflows.
- Experience with version control software such as Git and major hosting services (e.g., Azure DevOps, GitHub, Bitbucket, GitLab).
- Implement code best practices, adhering to guidelines that enhance code readability, maintainability, and overall quality.
Educational Qualification: 15 years full-time education
Posted 5 days ago
3.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As an Azure Data Engineer within our team, you will play a crucial role in enhancing and supporting existing Data & Analytics solutions by utilizing Azure Data Engineering technologies. Your primary focus will be on developing, maintaining, and deploying IT products and solutions that cater to various business users, with a strong emphasis on performance, scalability, and reliability. Your responsibilities will include incident classification and prioritization, log analysis, coordination with SMEs, escalation of complex issues, root cause analysis, stakeholder communication, code reviews, bug fixing, enhancements, and performance tuning. You will design, develop, and support data pipelines using Azure services, implement ETL techniques, cleanse and transform datasets, orchestrate workflows, and collaborate with both business and technical teams. To excel in this role, you should possess 3 to 6 years of experience in IT and Azure data engineering technologies, with a strong command over Azure Databricks, Azure Synapse, ADLS Gen2, Python, PySpark, SQL, JSON, Parquet, Teradata, Snowflake, Azure DevOps, and CI/CD pipeline deployments. Knowledge of Data Warehousing concepts, data modeling best practices, and familiarity with SNOW (ServiceNow) will be advantageous. In addition to technical skills, you should demonstrate the ability to work independently and in virtual teams, strong analytical and problem-solving abilities, experience in Agile practices, effective task and time management, and clear communication and documentation skills. Experience with Business Intelligence tools, particularly Power BI, and possessing the DP-203 certification (Azure Data Engineer Associate) will be considered a plus. Join us in Chennai, Tamilnadu, India, and be part of our dynamic team working in the FMCG/Foods/Beverage domain.,
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
Hyderabad, Telangana
On-site
You are a Java Developer with AI/ML experience, required to have at least 5 years of industry experience in Java, Spring Boot, Spring Data, and a minimum of 2 years of AI/ML project or professional experience. You should possess a strong background in developing and consuming REST APIs and asynchronous messaging using technologies like Kafka or RabbitMQ. Your role involves integrating AI/ML models into Java services or making calls to external ML endpoints. You need to have a comprehensive understanding of the ML lifecycle encompassing training, validation, inference, monitoring, and retraining. Familiarity with tools such as TensorFlow, PyTorch, Scikit-Learn, or ONNX is essential. Previous experience in implementing domain-specific ML solutions like fraud detection, recommendation systems, or NLP chatbots is beneficial. Proficiency in working with various data formats including JSON, Parquet, Avro, and CSV is required. You should have a solid grasp of both SQL (PostgreSQL, MySQL) and NoSQL (Redis) database systems. Your responsibilities will include integrating machine learning models (both batch and real-time) into backend systems and APIs, optimizing and automating AI/ML workflows using MLOps best practices, and monitoring model performance, versioning, and rollbacks. Collaboration with cross-functional teams such as DevOps, SRE, and Product Engineering is necessary to ensure smooth deployment. Exposure to MLOps tools like MLflow, Kubeflow, or Seldon is desired. Experience with at least one cloud platform, preferably AWS, and knowledge of observability tools, metrics, events, logs, and traces (e.g., Prometheus, Grafana, OpenTelemetry, Splunk, Datadog, AppDynamics) are valuable skills in this role.
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Karnataka
On-site
We are looking for a skilled Data Engineer to join our team, working on end-to-end data engineering and data science use cases. The ideal candidate will have strong expertise in Python or Scala, Spark (Databricks), and SQL, building scalable and efficient data pipelines on Azure. Responsibilities include designing, building, and maintaining scalable ETL/ELT data pipelines using Azure Data Factory, Databricks, and Spark. Developing and optimizing data workflows using SQL and Python or Scala for large-scale data processing and transformation. Implementing performance tuning and optimization strategies for data pipelines and Spark jobs to ensure efficient data handling. Collaborating with data engineers to support feature engineering, model deployment, and end-to-end data engineering workflows. Ensuring data quality and integrity by implementing validation, error-handling, and monitoring mechanisms. Working with structured and unstructured data using technologies such as Delta Lake and Parquet within a Big Data ecosystem. Contributing to MLOps practices, including integrating ML pipelines, managing model versioning, and supporting CI/CD processes. Primary Skills required are Data Engineering & Cloud proficiency in Azure Data Platform (Data Factory, Databricks), strong skills in SQL and either Python or Scala for data manipulation, experience with ETL/ELT pipelines and data transformations, familiarity with Big Data technologies (Spark, Delta Lake, Parquet), expertise in data pipeline optimization and performance tuning, experience in feature engineering and model deployment, strong troubleshooting and problem-solving skills, experience with data quality checks and validation. Nice-to-Have Skills include exposure to NLP, time-series forecasting, and anomaly detection, familiarity with data governance frameworks and compliance practices, basics of AI/ML like ML & MLOps Integration, experience supporting ML pipelines with efficient data workflows, knowledge of MLOps practices (CI/CD, model monitoring, versioning). At Tesco, we are committed to providing the best for our colleagues. Total Rewards offered at Tesco are determined by four principles - simple, fair, competitive, and sustainable. Colleagues are entitled to 30 days of leave (18 days of Earned Leave, 12 days of Casual/Sick Leave) and 10 national and festival holidays. Tesco promotes programs supporting health and wellness, including insurance for colleagues and their family, mental health support, financial coaching, and physical wellbeing facilities on campus. Tesco in Bengaluru is a multi-disciplinary team serving customers, communities, and the planet. The goal is to create a sustainable competitive advantage for Tesco by standardizing processes, delivering cost savings, enabling agility through technological solutions, and empowering colleagues. Tesco Technology team consists of over 5,000 experts spread across the UK, Poland, Hungary, the Czech Republic, and India, dedicated to various roles including Engineering, Product, Programme, Service Desk and Operations, Systems Engineering, Security & Capability, Data Science, and others.,
Posted 1 week ago
4.0 - 8.0 years
14 - 17 Lacs
Gurugram
Hybrid
Position level: AI Specialist / Software engineering professional (can't be a junior/fresher; needs to be a middle- to senior-level person)
Work Experience: Ideally, 4 to 5 years working as a Data Scientist / Machine Learning and AI professional in a managerial position (end-to-end project responsibility). Slightly lower work experience can be considered based on the skill level of the candidate.
About the job: Use AI/ML to work with data to predict process behaviors. Stay abreast of industry trends, emerging technologies, and best practices in data science, and provide recommendations for adopting innovative approaches within the product teams. In addition, champion a data-driven culture, promoting best practices, knowledge sharing, and collaborative problem-solving.
Abilities: Knowledge of data analysis, Artificial Intelligence (AI), Machine Learning (ML), and preparation of test reports to show results of tests. Strong in communication with a collaborative attitude, not afraid to take responsibility and make decisions, open to new learning, and able to adapt. Experienced with end-to-end processes and used to presenting results to customers.
Technical Requirements:
- Experience working with real-world messy data (time series, sensors, etc.)
- Familiarity with machine learning and statistical modelling
- Ability to interpret model results in a business context
- Knowledge of data preprocessing (feature engineering, outlier handling, etc.)
Soft Skill Requirements:
- Analytical thinking – Ability to connect results to business or process understanding
- Communication skills – Comfortable explaining complex topics to stakeholders
- Structured problem solving – Able to define and execute a structured way to reach results
- Autonomous working style – Can drive a small project or parts of a project
Tool Knowledge:
- Programming: Python (common core libraries: pandas, numpy, scikit-learn, matplotlib, mlflow, etc.); knowledge of best practices (PEP8, code structure, testing, etc.); code versioning (Git)
- Data Handling: SQL; understanding of data formats (CSV, JSON, Parquet); familiarity with time series data handling
- Infrastructure: Basic cloud technology knowledge (Azure (preferred), AWS, GCP); basic knowledge of MLOps workflow
- Good to have: Knowledge of Azure ML, AWS SageMaker; knowledge of MLOps best practices in any tool; containerization and deployment (Docker, Kubernetes)
Languages: English – Proficient/Fluent
Location: Hybrid (WFO + WFH) + availability to visit customer sites for meetings and work-related responsibilities as per the project requirement.
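By way of example, working with messy sensor time series along the lines described above usually starts with resampling, outlier handling, and lag/rolling features before any model is fit. The sketch below shows that flow with pandas and scikit-learn; the file name, column names, and thresholds are illustrative only, not a prescribed pipeline.

```python
# Illustrative only: cleaning a messy sensor series and fitting a simple model.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import TimeSeriesSplit
from sklearn.metrics import mean_absolute_error

df = pd.read_parquet("sensor_readings.parquet")            # hypothetical: timestamp, value
df = df.sort_values("timestamp").set_index("timestamp")

# resample to a regular hourly grid, fill short gaps, clip outliers to the 1st/99th percentiles
hourly = df["value"].resample("1h").mean().interpolate(limit=3)
lo, hi = hourly.quantile([0.01, 0.99])
hourly = hourly.clip(lo, hi)

# lag and rolling-window features (feature engineering step)
feats = pd.DataFrame({
    "lag_1": hourly.shift(1),
    "lag_24": hourly.shift(24),
    "roll_mean_24": hourly.shift(1).rolling(24).mean(),
}).dropna()
target = hourly.loc[feats.index]

# time-aware cross-validation instead of a random split
for train_idx, test_idx in TimeSeriesSplit(n_splits=3).split(feats):
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(feats.iloc[train_idx], target.iloc[train_idx])
    pred = model.predict(feats.iloc[test_idx])
    print("MAE:", mean_absolute_error(target.iloc[test_idx], pred))
```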
Posted 1 week ago
2.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Tiger Analytics is a global AI and analytics consulting firm with a team of over 2800 professionals focused on using data and technology to solve complex problems that impact millions of lives worldwide. Our culture is centered around expertise, respect, and a team-first mindset. Headquartered in Silicon Valley, we have delivery centers globally and offices in various cities across India, the US, UK, Canada, and Singapore, along with a significant remote workforce. At Tiger Analytics, we are certified as a Great Place to Work. Joining our team means being at the forefront of the AI revolution, working with innovative teams that push boundaries and create inspiring solutions. We are currently looking for an Azure Big Data Engineer to join our team in Chennai, Hyderabad, or Bangalore. As a Big Data Engineer (Azure), you will be responsible for building and implementing various analytics solutions and platforms on Microsoft Azure using a range of Open Source, Big Data, and Cloud technologies. Your typical day might involve designing and building scalable data ingestion pipelines, processing structured and unstructured data, orchestrating pipelines, collaborating with teams and stakeholders, and making critical tech-related decisions. To be successful in this role, we expect you to have 4 to 9 years of total IT experience with at least 2 years in big data engineering and Microsoft Azure. You should be proficient in technologies such as Azure Data Factory (ADF), PySpark, Databricks, ADLS, Azure SQL Database, Azure Synapse Analytics, Event Hub & Streaming Analytics, Cosmos DB, and Purview. Strong coding skills in SQL, Python, or Scala/Java are essential, as well as experience with big data technologies like Hadoop, Spark, Airflow, NiFi, Kafka, Hive, Neo4J, and Elastic Search. Knowledge of file formats such as Delta Lake, Avro, Parquet, JSON, and CSV is also required. Ideally, you should have experience in building REST APIs, working on Data Lake or Lakehouse projects, supporting BI and Data Science teams, and following Agile and DevOps processes. Certifications like Data Engineering on Microsoft Azure (DP-203) or Databricks Certified Developer (DE) would be a valuable addition to your profile. At Tiger Analytics, we value diversity and inclusivity, and we encourage individuals with different skills and qualities to apply, even if they do not meet all the criteria for the role. We are committed to providing equal opportunities and fostering a culture of listening, trust, respect, and growth. Please note that the job designation and compensation will be based on your expertise and experience, and our compensation packages are competitive within the industry. If you are passionate about leveraging data and technology to drive impactful solutions, we would love to stay connected with you.,
Posted 1 week ago
6.0 - 12.0 years
0 Lacs
Pune, Maharashtra
On-site
About Calfus: Calfus is a Silicon Valley headquartered software engineering and platforms company with a vision deeply rooted in the Olympic motto "Citius, Altius, Fortius Communiter". At Calfus, we aim to inspire our team to rise faster, higher, and stronger while fostering a collaborative environment to build software at speed and scale. Our primary focus is on creating engineered digital solutions that drive positive impact on business outcomes. Upholding principles of #Equity and #Diversity, we strive to create a diverse ecosystem that extends to the broader society. Join us at #Calfus and embark on an extraordinary journey with us! Position Overview: As a Data Engineer specializing in BI Analytics & DWH, you will be instrumental in crafting and implementing robust business intelligence solutions that empower our organization to make informed, data-driven decisions. Leveraging your expertise in Power BI, Tableau, and ETL processes, you will be responsible for developing scalable architectures and interactive visualizations. This role necessitates a strategic mindset, strong technical acumen, and effective collaboration with stakeholders across all levels. Key Responsibilities: - BI Architecture & DWH Solution Design: Develop and design scalable BI Analytical & DWH Solution aligning with business requirements, utilizing tools like Power BI and Tableau. - Data Integration: Supervise ETL processes through SSIS to ensure efficient data extraction, transformation, and loading into data warehouses. - Data Modelling: Establish and maintain data models that support analytical reporting and data visualization initiatives. - Database Management: Employ SQL for crafting intricate queries, stored procedures, and managing data transformations via joins and cursors. - Visualization Development: Spearhead the design of interactive dashboards and reports in Power BI and Tableau while adhering to best practices in data visualization. - Collaboration: Engage closely with stakeholders to gather requirements and translate them into technical specifications and architecture designs. - Performance Optimization: Analyze and optimize BI solutions for enhanced performance, scalability, and reliability. - Data Governance: Implement data quality and governance best practices to ensure accurate reporting and compliance. - Team Leadership: Mentor and guide junior BI developers and analysts to cultivate a culture of continuous learning and improvement. - Azure Databricks: Utilize Azure Databricks for data processing and analytics to seamlessly integrate with existing BI solutions. Qualifications: - Bachelor's degree in computer science, Information Systems, Data Science, or a related field. - 6-12 years of experience in BI architecture and development, with a strong emphasis on Power BI and Tableau. - Proficiency in ETL processes and tools, particularly SSIS. Strong command over SQL Server, encompassing advanced query writing and database management. - Proficient in exploratory data analysis using Python. - Familiarity with the CRISP-DM model. - Ability to work with various data models and databases like Snowflake, Postgres, Redshift, and MongoDB. - Experience with visualization tools such as Power BI, QuickSight, Plotly, and Dash. - Strong programming foundation in Python for data manipulation, analysis, serialization, database interaction, data pipeline and ETL tools, cloud services, and more. - Familiarity with Azure SDK is a plus. 
- Experience with code quality management, version control, collaboration in data engineering projects, and interaction with REST APIs and web scraping tasks is advantageous. Calfus Inc. is an Equal Opportunity Employer.
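As a small illustration of the exploratory data analysis step mentioned above, the snippet below profiles a hypothetical sales extract with pandas and produces the kind of monthly/regional aggregate one might stage behind a Power BI or Tableau model. The file and column names are assumptions.

```python
# Small, illustrative EDA pass over a hypothetical sales extract.
import pandas as pd

sales = pd.read_csv("sales_fact.csv", parse_dates=["order_date"])

# quick profile: shape, dtypes and missingness
print(sales.shape)
print(sales.dtypes)
print(sales.isna().mean().sort_values(ascending=False).head())

# the kind of aggregate one might stage behind a Power BI / Tableau dashboard
monthly = (sales
           .assign(month=sales["order_date"].dt.to_period("M").astype(str))
           .groupby(["month", "region"], as_index=False)["revenue"].sum())
print(monthly.head())
```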
Posted 1 week ago
7.0 - 12.0 years
10 - 20 Lacs
Kolkata
Hybrid
About the Role: We are seeking a Senior Python Developer to lead the development of scalable, high-performance backend systems and data-driven applications. You will work on building robust services, APIs, and data workflows that power real-time and batch applications. This role is ideal for a seasoned developer who thrives on solving complex problems, building reliable software, and working across data and engineering teams. You'll play a key role in designing and developing Python-based systems that interface with big data frameworks, cloud platforms, and analytics tools.
Key Responsibilities: Design and develop robust Python-based backend services and microservices. Build RESTful APIs and integrations with third-party systems and internal tools. Work with data workflows involving ingestion, transformation, and validation. Develop and maintain ETL/ELT pipelines using Python, SQL, and Airflow. Collaborate with DevOps teams to deploy, monitor, and scale applications in cloud environments (AWS, Azure, GCP). Optimize code for performance, scalability, and maintainability. Write unit and integration tests, participate in code reviews, and follow CI/CD best practices. Work with cloud storage, databases, and data lake technologies like S3, Parquet, and DuckDB. Collaborate with data scientists, analysts, and engineers to enable data access and modeling.
Required Skills & Qualifications: 7+ years of experience in backend software development, primarily in Python. Strong understanding of OOP, modular design, and Python design patterns. Experience with web frameworks (e.g., Flask, FastAPI, Django). Hands-on experience with data processing tools like Pandas, SQLAlchemy, PyArrow. Proficiency with SQL and database technologies (PostgreSQL, DuckDB, etc.). Experience with Airflow for workflow orchestration. Knowledge of building scalable, distributed applications and microservices. Familiarity with cloud platforms (AWS, Azure, GCP) and containerization tools (Docker, Kubernetes). Solid understanding of version control (Git), testing frameworks (PyTest), and CI/CD pipelines. Excellent problem-solving, debugging, and communication skills.
Preferred Skills (Nice to Have): Experience with data processing frameworks like Apache Spark or Vaex. Familiarity with data storage formats and tools like Parquet, Iceberg. Exposure to data streaming platforms (Kafka, Kinesis, Flink). Experience integrating backend systems with AI/ML pipelines or BI platforms. Understanding of security standards and compliance (GDPR, HIPAA, SOC2). Background in metadata-driven or event-driven architectures.
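To make the stack concrete, the sketch below exposes a small FastAPI endpoint that queries Parquet files directly with DuckDB, roughly the kind of backend service-plus-data-access pattern this role describes. The path, column names, and query are hypothetical; in practice the service would be run with an ASGI server such as uvicorn (e.g. `uvicorn app:app`).

```python
# Sketch of a small read-only API over Parquet using DuckDB; names are assumptions.
import duckdb
from fastapi import FastAPI, HTTPException

app = FastAPI(title="orders-api")
con = duckdb.connect()  # in-memory engine; reads Parquet files directly from disk

@app.get("/orders/daily")
def daily_totals(start: str, end: str):
    try:
        rows = con.execute(
            """
            SELECT order_date, SUM(amount) AS total
            FROM read_parquet('data/orders/*.parquet')   -- hypothetical layout
            WHERE order_date BETWEEN ? AND ?
            GROUP BY order_date
            ORDER BY order_date
            """,
            [start, end],
        ).fetchall()
    except duckdb.Error as exc:
        raise HTTPException(status_code=400, detail=str(exc))
    return [{"order_date": str(day), "total": total} for day, total in rows]
```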
Posted 1 week ago
5.0 - 8.0 years
11 - 16 Lacs
Gurugram
Work from Office
Role Description: Senior Scala Data Engineer. The Scala Data Engineer needs to be able to understand existing code and help refactor and migrate it into a new environment.
Role and responsibilities:
* Read existing Scala Spark code.
* Create unit tests for Scala Spark code.
* Enhance and write Scala Spark code.
* Work proficiently with S3 files in CSV and Parquet format.
* Work proficiently with MongoDB.
* Build up environments independently to test assigned work; execute manual and automated tests.
* Experience with enterprise tools, like Git, Azure, TFS.
* Experience with JIRA or a similar defect tracking tool.
* Engage and participate on an Agile team of world-class software developers.
* Apply independence and creativity to problem solving across project assignments.
* Effectively communicate with team members, project managers and clients, as required.
Core Skills: Scala, Spark, AWS Glue, AWS Step Functions, Maven, Terraform.
Technical skills requirements - the candidate must demonstrate proficiency in: reading and writing Scala Spark code; good programming knowledge of Scala and Python; SQL and BDD framework knowledge; experience with the AWS stack (S3, Glue, Step Functions); experience in Agile/Scrum development; full SDLC from development to production deployment; good communication skills.
Posted 1 week ago
5.0 - 10.0 years
4 - 9 Lacs
Bengaluru
Work from Office
Summary: We are seeking a highly skilled and experienced Snowflake Database Administrator (DBA) to join our team. The ideal candidate will be responsible for the administration, management, and optimization of our Snowflake data platform. The role requires strong expertise in database design, performance tuning, security, and data governance within the Snowflake environment. Key Responsibilities: Administer and manage Snowflake cloud data warehouse environments, including provisioning, configuration, monitoring, and maintenance. Implement security policies, compliance, and access controls. Manage Snowflake accounts and databases in a multi-tenant environment. Monitor the systems and provide proactive solutions to ensure high availability and reliability. Monitor and manage Snowflake costs. Collaborate with developers, support engineers and business stakeholders to ensure efficient data integration. Automate database management tasks and procedures to improve operational efficiency. Stay up to date with the latest Snowflake features, best practices, and industry trends to enhance the overall data architecture. Develop and maintain documentation, including database configurations, processes, and standard operating procedures. Support disaster recovery and business continuity planning for Snowflake environments. Required Qualifications: Bachelors degree in computer science, Information Technology, or a related field. 5+ years of experience in Snowflake operations and administration. Strong knowledge of SQL, query optimization, and performance tuning techniques. Experience in managing security, access controls, and data governance in Snowflake. Familiarity with AWS. Proficiency in Python or Bash. Experience in automating database tasks using Terraform, CloudFormation, or similar tools. Understanding of data modeling concepts and experience working with structured and semi-structured data (JSON, Avro, Parquet). Strong analytical, problem-solving, and troubleshooting skills. Excellent communication and collaboration abilities. Preferred Qualifications: Snowflake certification (e.g., SnowPro Core, SnowPro Advanced: Architect, Administrator). Experience with CI/CD pipelines and DevOps practices for database management. Knowledge of machine learning and analytics workflows within Snowflake. Hands-on experience with data streaming technologies (Kafka, AWS Kinesis, etc.).
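As an example of the kind of task automation this role calls for, the sketch below uses the Snowflake Python connector to list warehouses and suspend any that are running, a simplified stand-in for a real idle/cost policy. Credentials come from environment variables; the role name, the column positions read from `SHOW WAREHOUSES`, and the "suspend anything running" rule are assumptions for illustration only.

```python
# Simplified sketch of Snowflake admin automation with the Python connector.
import os
import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    role="SYSADMIN",          # assumed role
)
cur = conn.cursor()
try:
    cur.execute("SHOW WAREHOUSES")
    for row in cur.fetchall():
        name, state = row[0], row[1]   # assumed column order: name, state, ...
        if state == "STARTED":
            print(f"Suspending warehouse {name}")
            cur.execute(f'ALTER WAREHOUSE "{name}" SUSPEND')
finally:
    cur.close()
    conn.close()
```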
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As a Data Engineer at our company, you will be responsible for building and maintaining secure, scalable data pipelines using Databricks and Azure. Your role will involve handling ingestion from diverse sources such as files, APIs, and streaming data, performing data transformation, and ensuring quality validation. Additionally, you will collaborate closely with subsystem data science and product teams to ensure ML readiness. To excel in this role, you should possess the following skills and experience: - Technical proficiency in Notebooks (SQL, Python), Delta Lake, Unity Catalog, ADLS/S3, job orchestration, APIs, structured logging, and IaC (Terraform). - Delivery expertise in trunk-based development, TDD, Git, and CI/CD for notebooks and pipelines. - Integration knowledge encompassing JSON, CSV, XML, Parquet, SQL/NoSQL/graph databases. - Strong communication skills enabling you to justify decisions, document architecture, and align with enabling teams. In return for your contributions, you will benefit from: - Proximity Talks: Engage with other designers, engineers, and product experts to learn from industry leaders. - Continuous learning opportunities: Work alongside a world-class team, challenge yourself daily, and expand your knowledge base. About Us: Proximity is a trusted technology, design, and consulting partner for prominent Sports, Media, and Entertainment companies globally. Headquartered in San Francisco, we also have offices in Palo Alto, Dubai, Mumbai, and Bangalore. Since 2019, our team at Proximity has developed high-impact, scalable products used by 370 million daily users, with a total net worth of $45.7 billion among our client companies. Join our diverse team of coders, designers, product managers, and experts at Proximity. We tackle complex problems and build cutting-edge tech solutions at scale. As part of our rapidly growing team of Proxonauts, your contributions will significantly impact the company's success. You will have the opportunity to collaborate with experienced leaders who have spearheaded multiple tech, product, and design teams. To learn more about us: - Watch our CEO, Hardik Jagda, share insights about Proximity. - Discover Proximity's values and meet some of our Proxonauts. - Explore our website, blog, and design wing - Studio Proximity. - Follow us on Instagram for behind-the-scenes content: @ProxWrks and @H.Jagda.,
Posted 1 week ago
2.0 - 6.0 years
3 - 7 Lacs
Gurugram
Work from Office
We are looking for a Pyspark Developer that loves solving complex problems across a full spectrum of technologies. You will help ensure our technological infrastructure operates seamlessly in support of our business objectives. Responsibilities Develop and maintain data pipelines implementing ETL processes. Take responsibility for Hadoop development and implementation. Work closely with a data science team implementing data analytic pipelines. Help define data governance policies and support data versioning processes. Maintain security and data privacy working closely with Data Protection Officer internally. Analyse a vast number of data stores and uncover insights. Skillset Required Ability to design, build and unit test the applications in Pyspark. Experience with Python development and Python data transformations. Experience with SQL scripting on one or more platforms Hive, Oracle, PostgreSQL, MySQL etc. In-depth knowledge of Hadoop, Spark, and similar frameworks. Strong knowledge of Data Management principles. Experience with normalizing/de-normalizing data structures, and developing tabular, dimensional and other data models. Have knowledge about YARN, cluster, executor, cluster configuration. Hands on working in different file formats like Json, parquet, csv etc. Experience with CLI on Linux-based platforms. Experience analysing current ETL/ELT processes, define and design new processes. Experience analysing business requirements in BI/Analytics context and designing data models to transform raw data into meaningful insights. Good to have knowledge on Data Visualization. Experience in processing large amounts of structured and unstructured data, including integrating data from multiple sources.
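For illustration, a typical pipeline of the kind described above reads semi-structured JSON and a CSV reference table, de-duplicates and enriches the data, and writes partitioned Parquet. The sketch below shows that in PySpark; the paths and column names are hypothetical.

```python
# Illustrative ETL: JSON + CSV in, cleaned and joined, partitioned Parquet out.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-etl").getOrCreate()

events = spark.read.option("multiLine", True).json("/data/raw/events/*.json")   # hypothetical

devices = (spark.read
           .option("header", True)
           .option("inferSchema", True)
           .csv("/data/ref/devices.csv"))                                       # hypothetical

cleaned = (events
           .dropDuplicates(["event_id"])
           .withColumn("event_date", F.to_date("event_ts"))
           .join(devices, "device_id", "left"))

(cleaned.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("/data/curated/events/"))
```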
Posted 2 weeks ago
4.0 - 6.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Title Databricks Engineer Location [NCR / Bengaluru] Job Type [Full-time] Experience Level 4+ years in data engineering with a strong focus on Databricks Domain [Healthcare] Job Summary: We are seeking a highly skilled and motivated Databricks Engineer to join our data engineering team. The ideal candidate will have strong experience in designing, developing, and optimizing large-scale data pipelines and analytics solutions using the Databricks Unified Analytics Platform, Apache Spark, Delta Lake, Data Factory and modern data lake/lakehouse architectures. You will work closely with data architects, data scientists, and business stakeholders to enable high-quality, scalable, and reliable data processing frameworks that support business intelligence, advanced analytics, and machine learning initiatives. Key Responsibilities: Design and implement batch and real-time ETL/ELT pipelines using Databricks and Apache Spark. Ingest, transform, and deliver structured and semi-structured data from diverse data sources (e.g., file systems, databases, APIs, event streams). Develop reusable Databricks notebooks, jobs, and libraries for repeatable data workflows. Implement and manage Delta Lake solutions to support ACID transactions, time-travel, and schema evolution. Ensure data integrity through validation, profiling, and automated quality checks. Apply data governance principles, including access control, encryption, and data lineage, using available tools (e.g., Unity Catalog, external metadata catalogs). Work with data scientists and analysts to deliver clean, curated, and analysis-ready data. Profile and optimize Spark jobs for performance, scalability, and cost. Monitor, debug, and troubleshoot data pipelines and distributed processing issues. Set up alerting and monitoring for long-running or failed jobs. Participate in the CI/CD lifecycle using tools like Git, GitHub Actions, Jenkins, or Azure DevOps. Required Skills & Experience: 4+ years of experience in data engineering. Strong hands-on experience with Apache Spark (DataFrames, Spark SQL, RDDs, Structured Streaming). Proficient in Python (PySpark) and SQL for data processing and transformation. Understanding of Cloud environment (Azure & AWS). Solid understanding of Delta Lake, Data Factory and Lakehouse architecture. Experience working with various data formats such as Parquet, JSON, Avro, CSV. Familiarity with DevOps practices, version control (Git), and CI/CD pipelines for data workflows. Experience with data modeling, dimensional modeling, and data warehouse concepts.
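As a hedged example of the Delta Lake work described above, the sketch below applies a CDC-style upsert into a silver table with the delta-spark MERGE API. The table, paths, columns, and the `op` flag convention ('D' marking deletes) are assumptions, and Delta Lake is assumed to be configured on the cluster (as Databricks runtimes provide by default).

```python
# Hedged sketch: CDC upsert into a silver Delta table via MERGE.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

updates = spark.read.parquet("/mnt/landing/customers_cdc/")   # hypothetical CDC extract
target = DeltaTable.forName(spark, "silver.customers")        # hypothetical table

(target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedDelete(condition="s.op = 'D'")
    .whenMatchedUpdateAll(condition="s.op <> 'D'")
    .whenNotMatchedInsertAll(condition="s.op <> 'D'")
    .execute())
```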
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As an organization with over 26 years of experience in delivering Software Product Development, Quality Engineering, and Digital Transformation Consulting Services to Global SMEs & Large Enterprises, CES has established long-term relationships with leading Fortune 500 Companies across various industries such as Automotive, AgTech, Bio Science, EdTech, FinTech, Manufacturing, Online Retailers, and Investment Banks. These relationships, spanning over a decade, are built on our commitment to timely delivery of quality services, investments in technology innovations, and fostering a true partnership mindset with our customers. In our current phase of exponential growth, we maintain a consistent focus on continuous improvement and a process-oriented culture. To further support our accelerated growth, we are seeking qualified and committed individuals to join us and play an exceptional role. You can learn more about us at: http://www.cesltd.com/ Experience with Azure Synapse Analytics is a key requirement for this role. The ideal candidate should have hands-on experience in designing, developing, and deploying solutions using Azure Synapse Analytics, including a good understanding of its various components such as SQL pools, Spark pools, and Integration Runtimes. Proficiency in Azure Data Lake Storage is also essential, with a deep understanding of its architecture, features, and best practices for managing a large-scale Data Lake or Lakehouse in an Azure environment. Moreover, the candidate should have experience with AI Tools and LLMs (e.g. GitHub Copilot, Copilot, ChatGPT) for automating responsibilities related to the role. Knowledge of Avro and Parquet file formats is required, including experience in data serialization, compression techniques, and schema evolution in a big data environment. Prior experience working with data in a healthcare or clinical laboratory setting is highly desirable, along with a strong understanding of PHI, GDPR, HIPAA, and HITRUST regulations. Relevant certifications such as Azure Data Engineer Associate or Azure Synapse Analytics Developer Associate are highly desirable for this position. The essential functions of the role include designing, developing, and maintaining data pipelines for ingestion, transformation, and loading of data into Azure Synapse Analytics, as well as working on data models, SQL queries, stored procedures, and other artifacts necessary for data processing and analysis. Successful candidates should possess proficiency in relational databases such as Oracle, Microsoft SQL Server, PostgreSQL, MySQL/MariaDB, strong SQL skills, experience in building ELT pipelines and data integration solutions, familiarity with data modeling and warehousing concepts, and excellent analytical and problem-solving abilities. Effective communication and collaboration skills are also crucial for collaborating with cross-functional teams. If you are a dedicated professional with the required expertise and skills, we invite you to join our team and contribute to our continued success in delivering exceptional services to our clients.
Posted 2 weeks ago
1.0 - 7.0 years
3 - 9 Lacs
Pune
Work from Office
Required Skills and Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Hands-on experience in data pipeline testing, preferably in a cloud environment.
- Strong experience with Google Cloud Platform services, especially BigQuery.
- Proficient in working with Kafka, Hive, Parquet files, and Snowflake.
- Expertise in Data Quality Testing and metrics calculations for both batch and streaming data.
- Excellent programming skills in Python and experience with test automation.
- Strong analytical and problem-solving abilities.
- Excellent communication and teamwork skills.
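As an illustration of data quality testing in Python, the pytest sketch below validates a Parquet output for duplicate keys, nulls, and value ranges. The file path, columns, and the range bound are hypothetical; in practice the same checks could be pointed at a BigQuery or Snowflake extract instead of a local file.

```python
# Illustrative pytest suite validating a Parquet output; names are assumptions.
import pandas as pd
import pytest

PARQUET_PATH = "output/trades.parquet"   # hypothetical pipeline output

@pytest.fixture(scope="module")
def df():
    return pd.read_parquet(PARQUET_PATH)

def test_no_duplicate_keys(df):
    assert not df.duplicated(subset=["trade_id"]).any()

def test_required_columns_not_null(df):
    for col in ["trade_id", "trade_date", "amount"]:
        assert df[col].notna().all(), f"nulls found in {col}"

def test_amount_within_expected_range(df):
    assert df["amount"].between(0, 1e9).all()
```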
Posted 2 weeks ago
5.0 - 10.0 years
1 - 5 Lacs
Bengaluru
Work from Office
Job Title: AWS Data Engineer
Experience: 5-10 Years
Location: Bangalore
Technical Skills:
- 5+ years of experience as an AWS Data Engineer: AWS S3, Glue Catalog, Glue Crawler, Glue ETL, Athena.
- Write Glue ETLs to convert data in AWS RDS for SQL Server and Oracle DB to Parquet format in S3.
- Execute Glue crawlers to catalog S3 files; create a catalog of S3 files for easier querying.
- Create SQL queries in Athena.
- Define data lifecycle management for S3 files.
- Strong experience in developing, debugging, and optimizing Glue ETL jobs using PySpark or Glue Studio.
- Ability to connect Glue ETLs with AWS RDS (SQL Server and Oracle) for data extraction and write transformed data into Parquet format in S3.
- Proficiency in setting up and managing Glue Crawlers to catalog data in S3.
- Deep understanding of S3 architecture and best practices for storing large datasets.
- Experience in partitioning and organizing data for efficient querying in S3.
- Knowledge of Parquet file format advantages for optimized storage and querying.
- Expertise in creating and managing the AWS Glue Data Catalog to enable structured and schema-aware querying of data in S3.
- Experience with Amazon Athena for writing complex SQL queries and optimizing query performance.
- Familiarity with creating views or transformations in Athena for business use cases.
- Knowledge of securing data in S3 using IAM policies, S3 bucket policies, and KMS encryption.
- Understanding of regulatory requirements (e.g., GDPR) and implementing secure data handling practices.
Non-Technical Skills:
- Candidate needs to be a good team player.
- Effective interpersonal, team building and communication skills.
- Ability to communicate complex technology to a non-technical audience in a simple and precise manner.
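A minimal sketch of the Glue ETL described above might look like the following: a Glue PySpark job that reads a crawled RDS table from the Glue Data Catalog and writes partitioned Parquet to S3, after which a crawler over the output prefix (or an Athena DDL) makes it queryable. The database, table, bucket, and partition keys are assumptions.

```python
# Sketch of a Glue PySpark job: catalog table in, partitioned Parquet on S3 out.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# source: RDS table already registered in the Glue Data Catalog by a crawler
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sqlserver_sales",     # hypothetical catalog database
    table_name="dbo_orders",        # hypothetical crawled table
)

# sink: partitioned Parquet in S3 for Athena to query via the catalog
glue_context.write_dynamic_frame.from_options(
    frame=orders,
    connection_type="s3",
    connection_options={
        "path": "s3://my-data-lake/curated/orders/",   # hypothetical bucket
        "partitionKeys": ["year", "month"],
    },
    format="parquet",
)

job.commit()
```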
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Join us as a Data Engineer at Barclays, where you will spearhead the evolution of our infrastructure and deployment pipelines, driving innovation and operational excellence. You will harness cutting-edge technology to build and manage robust, scalable and secure infrastructure, ensuring seamless delivery of our digital solutions. To be successful as a Data Engineer, you should have experience with hands-on experience in Pyspark and a strong knowledge of Dataframes, RDD, and SparkSQL. You should also have hands-on experience in developing, testing, and maintaining applications on AWS Cloud. A strong hold on AWS Data Analytics Technology Stack (Glue, S3, Lambda, Lake formation, Athena) is essential. Additionally, you should be able to design and implement scalable and efficient data transformation/storage solutions using Snowflake. Experience in data ingestion to Snowflake for different storage formats such as Parquet, Iceberg, JSON, CSV, etc., is required. Familiarity with using DBT (Data Build Tool) with Snowflake for ELT pipeline development is necessary. Advanced SQL and PL SQL programming skills are a must. Experience in building reusable components using Snowflake and AWS Tools/Technology is highly valued. Exposure to data governance or lineage tools such as Immuta and Alation is an added advantage. Knowledge of Orchestration tools such as Apache Airflow or Snowflake Tasks is beneficial, and familiarity with Abinitio ETL tool is a plus. Some other highly valued skills may include the ability to engage with stakeholders, elicit requirements/user stories, and translate requirements into ETL components. A good understanding of infrastructure setup and the ability to provide solutions either individually or working with teams is essential. Knowledge of Data Marts and Data Warehousing concepts, along with good analytical and interpersonal skills, is required. Implementing Cloud-based Enterprise data warehouse with multiple data platforms along with Snowflake and NoSQL environment to build data movement strategy is also important. You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, digital and technology, as well as job-specific technical skills. The role is based out of Chennai. Purpose of the role: To build and maintain the systems that collect, store, process, and analyze data, such as data pipelines, data warehouses, and data lakes to ensure that all data is accurate, accessible, and secure. Accountabilities: - Build and maintenance of data architectures pipelines that enable the transfer and processing of durable, complete, and consistent data. - Design and implementation of data warehouses and data lakes that manage the appropriate data volumes and velocity and adhere to the required security measures. - Development of processing and analysis algorithms fit for the intended data complexity and volumes. - Collaboration with data scientists to build and deploy machine learning models. Analyst Expectations: - Meet the needs of stakeholders/customers through specialist advice and support. - Perform prescribed activities in a timely manner and to a high standard which will impact both the role itself and surrounding roles. - Likely to have responsibility for specific processes within a team. - Lead and supervise a team, guiding and supporting professional development, allocating work requirements, and coordinating team resources. 
- Demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. - Manage own workload, take responsibility for the implementation of systems and processes within own work area and participate in projects broader than the direct team. - Execute work requirements as identified in processes and procedures, collaborating with and impacting on the work of closely related teams. - Provide specialist advice and support pertaining to own work area. - Take ownership for managing risk and strengthening controls in relation to the work you own or contribute to. - Deliver work and areas of responsibility in line with relevant rules, regulations, and codes of conduct. - Maintain and continually build an understanding of how all teams in the area contribute to the objectives of the broader sub-function, delivering impact on the work of collaborating teams. - Continually develop awareness of the underlying principles and concepts on which the work within the area of responsibility is based, building upon administrative/operational expertise. - Make judgements based on practice and previous experience. - Assess the validity and applicability of previous or similar experiences and evaluate options under circumstances that are not covered by procedures. - Communicate sensitive or difficult information to customers in areas related specifically to customer advice or day-to-day administrative requirements. - Build relationships with stakeholders/customers to identify and address their needs. All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset to Empower, Challenge, and Drive the operating manual for how we behave.,
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
Maharashtra
On-site
NTT DATA is looking for a Data Ingest Engineer to join the team in Pune, Maharashtra (IN-MH), India (IN). As a Data Ingest Engineer, you will be part of the Ingestion team of the DRIFT data ecosystem, focusing on ingesting data in a timely, complete, and comprehensive manner using the latest technology available to Citi. Your role will involve leveraging new and creative methods for repeatable data ingestion from various sources while ensuring the highest quality data is provided to downstream partners. Responsibilities include partnering with management teams to integrate functions effectively, identifying necessary system enhancements for new products and process improvements, and resolving high impact problems/projects through evaluation of complex business processes and industry standards. You will provide expertise in applications programming, ensure application design aligns with the overall architecture blueprint, and develop standards for coding, testing, debugging, and implementation. Additionally, you will analyze issues, develop innovative solutions, and mentor mid-level developers and analysts. The ideal candidate should have 6-10 years of experience in Apps Development or systems analysis, with extensive experience in system analysis and programming of software applications. Proficiency in Application Development using Java, Scala, Spark, familiarity with event-driven applications and streaming data, and experience with various schema, data types, ELT methodologies, and formats are required. Experience working with Agile and version control tool sets, leadership skills, and clear communication abilities are also essential. NTT DATA is a trusted global innovator of business and technology services, serving 75% of the Fortune Global 100. With experts in more than 50 countries and a strong partner ecosystem, NTT DATA is committed to helping clients innovate, optimize, and transform for long-term success. As a part of the NTT Group, NTT DATA invests significantly in R&D to support organizations and society in moving confidently into the digital future. For more information, visit us at us.nttdata.com.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Maharashtra
On-site
The role of a Data Engineer is crucial for ensuring the smooth operation of the Data Platform in Azure / AWS Databricks. As a Data Engineer, you will be responsible for the continuous development, enhancement, support, and maintenance of data availability, data quality, performance enhancement, and stability of the system. Your primary responsibilities will include designing and implementing data ingestion pipelines from various sources using Azure Databricks, ensuring the efficient and smooth running of data pipelines, and adhering to security, regulatory, and audit control guidelines. You will also be tasked with driving optimization, continuous improvement, and efficiency in data processes. To excel in this role, it is essential to have a minimum of 5 years of experience in the data analytics field, hands-on experience with Azure/AWS Databricks, proficiency in building and optimizing data pipelines, architectures, and data sets, and excellent skills in Scala or Python, PySpark, and SQL. Additionally, you should be capable of troubleshooting and optimizing complex queries on the Spark platform, possess knowledge of structured and unstructured data design/modelling, data access, and data storage techniques, and expertise in designing and deploying data applications on cloud solutions such as Azure or AWS. Moreover, practical experience in performance tuning and optimizing code running in Databricks environment, demonstrated analytical and problem-solving skills, particularly in a big data environment, are essential for success in this role. In terms of technical/professional skills, proficiency in Azure/AWS Databricks, Python/Scala/Spark/PySpark, HIVE/HBase/Impala/Parquet, Sqoop, Kafka, Flume, SQL, RDBMS, Airflow, Jenkins/Bamboo, Github/Bitbucket, and Nexus will be advantageous for executing the responsibilities effectively.,
Posted 2 weeks ago