10.0 - 14.0 years
0 Lacs
Telangana
On-site
As the Vice President of Engineering at Teradata, you will lead the India-based software development organization within the AI Platform Group. Your main focus will be on executing the product roadmap for key technologies such as Vector Store, the Agent platform, Apps, user experience, and AI/ML-driven use cases at scale. Success in this role means building a world-class engineering culture, attracting and retaining top technical talent, accelerating hybrid, cloud-first product delivery, and driving innovation that brings measurable value to customers.

You will lead a team of over 150 engineers with the goal of helping customers achieve outcomes with data and AI. Collaboration with Product Management, Product Operations, Security, Customer Success, and Executive Leadership will be a key aspect of your role. You will also work closely with a high-impact regional team of up to 500 people spanning software development, cloud engineering, DevOps, engineering operations, and architecture.

To qualify for this position, you should have over 10 years of senior leadership experience in product development, engineering, or technology leadership at enterprise software product companies, including at least 3 years in a VP Product or equivalent role managing large-scale technical teams in a growth market. Experience leading the development of agentic AI and scaling AI in a hybrid cloud environment is essential, as is a record of implementing and scaling Agile and DevSecOps methodologies and modernizing legacy architectures into service-based systems.

Your background should include expertise in cloud platforms, data harmonization, data analytics for AI, Kubernetes, containerization, and microservices-based architectures. Experience delivering SaaS-based data and analytics platforms, along with familiarity with modern data stack technologies, AI/ML infrastructure, enterprise security, data governance, and API-first design, will be beneficial. A track record of building high-performing engineering cultures and inclusive leadership teams, and a passion for open-source collaboration, are desired qualities. A Master's degree in Engineering or Computer Science, or an MBA, is preferred.

At Teradata, we prioritize a people-first culture, embrace a flexible work model, focus on well-being, and are committed to Diversity, Equity, and Inclusion. Join us in fostering an equitable environment that celebrates individuals for all aspects of who they are.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
The Applications Development Senior Programmer Analyst position is an intermediate-level role in which you will participate in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. Your primary objective will be to contribute to applications systems analysis and programming activities.

Your responsibilities will include conducting feasibility studies, time and cost estimates, IT planning, risk technology, applications development, and model development, and establishing and implementing new or revised application systems and programs to meet specific business needs or user areas. You will monitor and control all phases of the development process, including analysis, design, construction, testing, and implementation, and will provide user and operational support on applications to business users.

You will apply in-depth specialty knowledge of applications development to analyze complex problems and issues, evaluate business processes, system processes, and industry standards, and make evaluative judgments. Furthermore, you will recommend and develop security measures in post-implementation analysis of business usage to ensure successful system design and functionality, consult with users/clients and other technology groups on issues, recommend advanced programming solutions, and install and assist customer exposure systems.

As the Applications Development Senior Programmer Analyst, you will ensure that essential procedures are followed, help define operating standards and processes, and serve as an advisor or coach to new or lower-level analysts. You will operate with a limited level of direct supervision, exercise independence of judgment and autonomy, and act as a subject matter expert to senior stakeholders and other team members. You will appropriately assess risk when business decisions are made, with particular consideration for the firm's reputation, and safeguard Citigroup, its clients, and its assets by driving compliance with applicable laws, rules, and regulations. Strong analytical and communication skills are required, and you must be results-oriented and willing and able to take ownership of engagements. Experience in the banking domain is a must.

Qualifications:

Must have:
- 8+ years of application/software development and maintenance experience
- 5+ years of experience with big data technologies such as Apache Spark, Hive, and Hadoop
- Knowledge of the Python, Java, or Scala programming language
- Experience with Java, web services, XML, JavaScript, microservices, SOA, etc.
- Strong technical knowledge of Apache Spark, Hive, SQL, and the Hadoop ecosystem
- Ability to work independently, multi-task, and take ownership of various analyses or reviews

Good to have:
- Work experience in Citi or on regulatory reporting applications
- Hands-on experience with cloud technologies, AI/ML integration, and the creation of data pipelines
- Experience with vendor products such as Tableau, Arcadia, Paxata, and KNIME
- Experience with API development and common data formats

Education:
- Bachelor's degree/University degree or equivalent experience

This is a high-level overview of the job responsibilities and qualifications. Other job-related duties may be assigned as required.
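For context on the kind of Spark-on-Hadoop work this posting describes, here is a minimal, hypothetical PySpark batch job against Hive tables. The table names, columns, and aggregation are illustrative assumptions, not details taken from the role.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("regulatory-report-aggregation")
    .enableHiveSupport()  # lets Spark read and write Hive tables
    .getOrCreate()
)

# Read raw trade records from a Hive table (schema and name are assumed)
trades = spark.table("raw.trades")

# Aggregate settled trades per business date and product for reporting
report = (
    trades
    .where(F.col("status") == "SETTLED")
    .groupBy("business_date", "product_code")
    .agg(
        F.sum("notional").alias("total_notional"),
        F.count("*").alias("trade_count"),
    )
)

# Write the summary back to a reporting schema, partitioned by date
(report.write
    .mode("overwrite")
    .partitionBy("business_date")
    .saveAsTable("reporting.daily_trade_summary"))
```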
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a Software Engineer (A2), you will design and develop AI-driven data ingestion frameworks and real-time processing solutions that enhance data analysis and machine learning capabilities across the full technology stack. Your duties will include deploying, maintaining, and supporting application code and machine learning models in production environments, ensuring seamless integration with front-end and back-end systems. You will also create and enhance AI solutions that enable the seamless integration and flow of data across the data ecosystem, facilitating advanced analytics and insights for end users. Additionally, you will conduct business analysis to gather requirements and develop ETL processes, scripts, and machine learning pipelines that meet technical specifications and business needs.

You will develop real-time data ingestion and stream-analytics solutions using technologies such as Kafka, Apache Spark, Python, and cloud platforms to support AI applications. Using multiple programming languages and tools, including Python, Spark, Hive, Presto, Java, and JavaScript frameworks, you will build prototypes for AI models and evaluate their effectiveness and feasibility. You will develop application systems that adhere to standard software development methodologies, ensuring robust design, programming, backup, and recovery processes to deliver high-performance AI solutions across the full stack.

As part of a team rotation, you will provide system support, collaborating with other engineers to resolve issues and enhance system performance across both front-end and back-end components. You will operationalize open-source AI and data-analytics tools for enterprise-scale applications, aligning them with organizational needs and user interfaces. You will also ensure compliance with data governance policies by implementing and validating data lineage, quality checks, and data classification in AI projects, and you will follow the company's software development lifecycle to develop, deploy, and deliver AI solutions effectively. Designing and developing AI frameworks that leverage open-source tools and advanced data processing frameworks, and integrating them with user-facing applications, will also be part of your role, as will leading the design and execution of complex AI projects in alignment with ethical guidelines under the guidance of senior team members.

In terms of mandatory technical skills, you should have strong proficiency in Python, Java, and C++, along with familiarity with machine learning frameworks such as TensorFlow and PyTorch. In-depth knowledge of ML, deep learning, and NLP algorithms is essential, as is hands-on experience building backend services with frameworks like FastAPI, Flask, or Django. Proficiency in front-end and back-end technologies, including JavaScript frameworks like React and Angular, is required to integrate user interfaces with AI models and data solutions, and you will develop and maintain data pipelines for AI applications to ensure efficient data extraction, transformation, and loading. Strong oral and written communication skills for conveying technical and non-technical concepts to peers and stakeholders are also crucial.

Preferred technical skills include experience with big data technologies such as Azure Databricks and Apache Spark, experience developing real-time data ingestion and stream-analytics solutions, and relevant certifications such as Microsoft Certified: Azure Data Engineer Associate or Azure AI Engineer Associate. The ideal candidate will have a Bachelor's or Master's degree in Computer Science and 2 to 4 years of software engineering experience. If you are open to collaborative learning, adept at managing project components beyond individual tasks, and strive to understand the business objectives driving data needs, this role is tailored for you.
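As an illustration of the real-time ingestion stack named above (Kafka plus Spark), here is a minimal Structured Streaming sketch. The broker address, topic, event schema, and output paths are assumptions, and it presumes the spark-sql-kafka connector package is on the Spark classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import (
    StructType, StructField, StringType, DoubleType, TimestampType,
)

spark = SparkSession.builder.appName("event-ingestion").getOrCreate()

# Assumed payload schema for the JSON events on the topic
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("user_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # assumed endpoint
    .option("subscribe", "events")                     # assumed topic
    .load()
)

# Kafka delivers raw bytes; parse the JSON payload into typed columns
events = (
    raw.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Append micro-batches to a data lake path for downstream ML pipelines
query = (
    events.writeStream
    .format("parquet")
    .option("path", "/data/events")              # assumed lake location
    .option("checkpointLocation", "/chk/events") # required for recovery
    .outputMode("append")
    .start()
)
query.awaitTermination()
```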
Posted 2 weeks ago
12.0 - 16.0 years
0 Lacs
Maharashtra
On-site
At PwC, the focus is on leveraging data to drive insights and make informed business decisions in the field of data and analytics. Using advanced analytics techniques, you will assist clients in optimizing their operations and achieving strategic goals. As a data analyst at PwC, your primary tasks will involve extracting insights from large datasets and facilitating data-driven decision-making. The role requires skills in data manipulation, visualization, and statistical modeling to help clients solve complex business problems effectively.

With a minimum of 12 years of hands-on experience, the responsibilities associated with this position include:

Architecture Design:
- Designing and implementing scalable, secure, and high-performance architectures for Generative AI applications.
- Integrating Generative AI models into existing platforms, focusing on compatibility and performance optimization.

Model Development and Deployment:
- Fine-tuning pre-trained generative models for domain-specific use cases.
- Developing data collection, sanitization, and data preparation strategies for model fine-tuning.
- Evaluating, selecting, and deploying appropriate Generative AI frameworks and tooling such as PyTorch, TensorFlow, CrewAI, AutoGen, LangGraph, Agentic code, and Agentflow.

Innovation and Strategy:
- Staying updated with the latest advancements in Generative AI to recommend innovative applications for solving complex business problems.
- Defining and executing the AI strategy roadmap while identifying key opportunities for AI transformation.

Collaboration and Leadership:
- Collaborating with cross-functional teams, including data scientists, engineers, and business stakeholders.
- Mentoring and guiding team members on AI/ML best practices and architectural decisions.

Performance Optimization:
- Monitoring the performance of deployed AI models and systems to ensure robustness and accuracy.
- Optimizing computational costs and infrastructure utilization for large-scale deployments.

Ethical and Responsible AI:
- Ensuring compliance with ethical AI practices, data privacy regulations, and governance frameworks.
- Implementing safeguards to mitigate bias, misuse, and unintended consequences of Generative AI.

Required Skills:
- Advanced programming skills in Python and fluency in data processing frameworks like Apache Spark.
- Strong knowledge of LLM foundation models and open-source models such as Llama 3.2, Phi, etc.
- Proven track record with event-driven architectures and real-time data processing systems.
- Familiarity with Azure DevOps and LLMOps tools for operationalizing AI workflows.
- Deep experience with Azure OpenAI Service, vector databases, API integrations, prompt engineering, and model fine-tuning.
- Knowledge of containerization technologies such as Kubernetes and Docker.
- Comprehensive understanding of data lakes and data management strategies.
- Expertise in LLM frameworks including LangChain, LlamaIndex, and Semantic Kernel.
- Proficiency in cloud computing platforms such as Azure or AWS.
- Exceptional leadership, problem-solving, and analytical abilities.
- Strong communication and collaboration skills, with experience managing high-performing teams.

Nice-to-Have Skills:
- Experience with additional technologies such as Datadog and Splunk.
- Relevant solution architecture certificates and continuous professional development in data engineering and GenAI.

Professional and Educational Background:
- Any graduate/BE/B.Tech/MCA/M.Sc/M.E/M.Tech/Master's degree/MBA.
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
Thiruvananthapuram, Kerala
On-site
As a Data Science Manager in the Research and Development (R&D) team at our organization, you will play a crucial role in driving innovation through advanced machine learning and AI algorithms. Your primary responsibility will be conducting applied research, development, and validation of cutting-edge algorithms to address complex real-world problems at large scale. You will collaborate closely with the product team to understand business challenges and product objectives, enabling you to devise creative algorithmic solutions, and you will create prototypes and demonstrations to validate new ideas and transform research findings into practical innovations in collaboration with AI engineers and software engineers.

In addition, you will formulate and execute research plans, carry out experiments, document and consolidate results, and potentially publish your work. You will safeguard the intellectual property resulting from R&D endeavors by working with relevant teams and external partners, mentor junior staff to ensure adherence to established procedures, and collaborate with various stakeholders, academic and research partners, and fellow researchers to deliver tangible outcomes.

To excel in this position, you need a strong foundation in computer science principles and proficiency in analyzing and designing AI/machine learning algorithms. Practical experience in several key areas is essential, such as supervised and unsupervised machine learning, reinforcement learning, deep learning, knowledge-based systems, evolutionary computing, and probabilistic graphical models. You should be adept in at least one programming language and have hands-on experience implementing AI/machine learning algorithms in Python or R. Familiarity with tools, frameworks, and libraries such as Jupyter/Zeppelin, scikit-learn, matplotlib, pandas, TensorFlow, Keras, and Apache Spark will be advantageous.

Ideally, you should have 2-5 years of applied research experience solving real-world problems using AI/machine learning techniques. A publication in a reputable AI/machine learning conference or journal, patents in the field, or contributions to open-source AI/machine learning projects will be considered strong assets.

If you are excited about this challenging opportunity, please refer to job code DSM_TVM for the position based in Trivandrum. For further details, feel free to reach out to us at recruitment@flytxt.com.
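As a small illustration of the Python ML tooling the posting lists, here is a minimal scikit-learn workflow on a built-in toy dataset; the dataset and model choices are arbitrary examples, not part of the role.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Toy supervised-learning setup: features X, labels y
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Fit a simple classifier and evaluate on the held-out split
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```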
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
Navi Mumbai, Maharashtra
On-site
Sciative is on a mission to create the future of dynamic pricing powered by artificial intelligence and big data. Our Software-as-a-Service products are used globally across industries including retail, ecommerce, travel, and entertainment. We are a fast-growing startup with 60+ employees, based in a plush office in Navi Mumbai. With our result-oriented product portfolio, we want to become the most customer-oriented company on the planet. To get there, we need exceptionally talented, bright, and driven people. If you'd like to help us build the place to find and buy anything online, this is your chance to make history.

We are looking for a dynamic, organized self-starter to join our Tech Team. You will:
- Collaborate with data scientists, software engineers, and business stakeholders to understand data requirements and design efficient data models.
- Develop, implement, and maintain robust and scalable data pipelines, ETL processes, and data integration solutions.
- Extract, transform, and load data from various sources, ensuring data quality, integrity, and consistency.
- Optimize data processing and storage systems to handle large volumes of structured and unstructured data efficiently.
- Perform data cleaning, normalization, and enrichment tasks to prepare datasets for analysis and modeling.
- Monitor data flows and processes, and identify and resolve data-related issues and bottlenecks.
- Contribute to the continuous improvement of data engineering practices and standards within the organization.
- Stay up to date with industry trends and emerging technologies in data engineering, artificial intelligence, and dynamic pricing.

What we look for:
- Strong passion for data engineering, artificial intelligence, and problem-solving.
- Solid understanding of data engineering concepts, data modeling, and data integration techniques.
- Proficiency in Python, SQL, and web scraping.
- Understanding of NoSQL, relational, and in-memory databases, and of technologies like MongoDB, Redis, and Apache Spark, would be an added advantage.
- Knowledge of distributed computing frameworks and big data technologies (e.g., Hadoop, Spark) is a plus.
- Excellent analytical and problem-solving skills, with a keen eye for detail.
- Strong communication and collaboration skills, with the ability to work effectively in a team-oriented environment.
- Self-motivated, a quick learner, and adaptable to changing priorities and technologies.
Posted 2 weeks ago
4.0 - 9.0 years
6 - 11 Lacs
Hyderabad
Work from Office
As a Mid Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges.

What you'll do:
- Design and develop data processing pipelines and analytics solutions using Databricks.
- Architect scalable and efficient data models and storage solutions on the Databricks platform.
- Collaborate with architects and other teams to migrate the current solution to Databricks.
- Optimize the performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.
- Apply best practices for data governance, security, and compliance on the Databricks platform.
- Mentor junior engineers and provide technical guidance.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.

You'll be expected to have:
- Bachelor's or master's degree in computer science, engineering, or a related field.
- 5 to 8 years of overall experience, including 2+ years designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP, and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively in cross-functional teams.
- Good to have: experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.
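For illustration of the pipeline work this listing describes, here is a minimal sketch of a bronze-to-silver cleansing step using Delta Lake. The paths, table layout, and quality rules are assumptions, and it presumes a Databricks runtime or a Spark session with Delta Lake configured.

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks the runtime provides `spark` with Delta built in;
# this builder line is only needed when running elsewhere.
spark = SparkSession.builder.appName("orders-bronze-to-silver").getOrCreate()

# Bronze -> silver: deduplicate raw orders and enforce basic quality rules
bronze = spark.read.format("delta").load("/mnt/lake/bronze/orders")  # assumed path

silver = (
    bronze
    .dropDuplicates(["order_id"])                  # assumed business key
    .where(F.col("order_ts").isNotNull())          # drop rows missing a timestamp
    .withColumn("order_date", F.to_date("order_ts"))
)

# Partition by date so downstream jobs can prune efficiently
(silver.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save("/mnt/lake/silver/orders"))
```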
Posted 2 weeks ago
2.0 - 9.0 years
0 Lacs
Haryana
On-site
The role of a Java Developer within our Digital & Information Office (DIO) team is crucial for driving the design, development, and enhancement of Operational Support Systems (OSS) applications. Your main responsibility will be to design, develop, and maintain high-quality, scalable Java applications that meet both business and technical requirements. Your expertise in Java technologies and development best practices will be essential in building efficient and reliable software solutions. By collaborating with cross-functional teams, you will contribute to technical design, troubleshoot and resolve issues, and ensure the performance and scalability of our applications.

In this role, you will ensure that key programs and projects align with OSS roadmap requirements in terms of time, quality, and cost. You will write efficient, reusable, and maintainable Java code, implementing complex features and functionalities as each project requires. You will analyze and troubleshoot software issues, providing effective solutions that maintain application performance, reliability, and scalability. It will be crucial for you to stay updated with the latest Java technologies, tools, and industry trends, and to advocate for improvements in development processes and practices. You will also create and maintain clear, comprehensive technical documentation to support development activities and facilitate knowledge transfer within the team.

To be successful in this role, you must have 2-9 years of experience in development, customization, configuration, and integration, and you should have led the delivery of projects ranging from small core enhancements to complex development efforts, from conception through deployment and operations support for OSS applications. Experience implementing Docker and Docker Swarm-based architectures is required, along with technical skills in Java, J2EE concepts, MVC frameworks (such as Struts, Hibernate, Spring, and Spring Boot), Apache Spark, web service technologies (such as SOAP and REST), and UI frameworks like React or Angular. A good understanding of RDBMS databases such as Oracle 10g/9i and MS SQL Server is also required. Ideally, you hold a bachelor's or master's degree in computer science, software engineering, or a closely related field.

We welcome women candidates looking to restart their professional journey after a career break. If you are passionate about Java development and possess the necessary skills and experience, we encourage you to apply for this exciting opportunity.
Posted 3 weeks ago
12.0 - 16.0 years
0 Lacs
Maharashtra
On-site
As an experienced professional with 12-14 years of experience, your primary role will involve developing a detailed project plan encompassing tasks, timelines, milestones, and dependencies. You will be responsible for solution architecture design and implementation, understanding the source systems, and outlining the Azure Data Factory (ADF) structure. Your expertise will be crucial in designing and scheduling packages using ADF. Facilitating collaboration and communication within the team is essential to ensure a smooth workflow. You will also focus on application performance optimization and monitor resource allocation to ensure tasks are adequately staffed. Creating detailed technical specifications, business requirements, and unit test report documents will be part of your responsibility.

Your role requires you to ensure that the project complies with best practices, coding standards, and technical requirements, and to collaborate with technical leads to address technical issues and mitigate risks. Your primary skill set should revolve around data architecture, with additional expertise in data modeling, ETL, Azure Log Analytics, analytics architecture, BI and visualization architecture, data engineering, cost management, Databricks, Datadog, Apache Spark, Azure Data Lake, and Azure Data Factory. Your proficiency in these areas will be instrumental in successfully executing your responsibilities.
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As a skilled PySpark Data Engineer, you will design, implement, and maintain PySpark-based applications that handle complex data processing tasks, ensure data quality, and integrate with diverse data sources. The role involves developing, testing, and optimizing PySpark applications to process, transform, and analyze large-scale datasets from sources such as relational databases, NoSQL databases, batch files, and real-time data streams. You will collaborate with data analysts, data scientists, and data architects to understand data processing requirements and deliver high-quality data solutions.

Your key responsibilities will include designing efficient data transformation and aggregation processes, developing error-handling mechanisms that protect data integrity, optimizing PySpark jobs for performance, and working with distributed datasets in Spark. Additionally, you will design and implement ETL processes to ingest and integrate data from multiple sources, ensuring consistency, accuracy, and performance.

You should have a Bachelor's degree in Computer Science or a related field, along with 5+ years of hands-on experience in big data development. Proficiency in PySpark, Apache Spark, and ETL development tools is essential. To succeed in this position, you need a strong understanding of data processing principles, techniques, and best practices in a big data environment, excellent analytical and problem-solving skills with the ability to translate business requirements into technical solutions, and strong communication and collaboration skills for working effectively with data analysts, data architects, and other team members. If you are looking to drive the development of robust data processing and transformation solutions in a fast-paced, data-driven environment, this role is ideal for you.
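A hedged sketch of the multi-source ETL pattern described above: one relational source read over JDBC, one batch-file source, a join, and a quarantine path for bad rows. Connection details, paths, and column names are assumptions, and a suitable JDBC driver is presumed to be on the classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("customer-etl").getOrCreate()

# Source 1: a relational database over JDBC (endpoint and table assumed)
customers = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://db-host:5432/crm")
    .option("dbtable", "public.customers")
    .option("user", "etl_user")
    .option("password", "***")
    .load()
)

# Source 2: batch files landed in object storage (path assumed)
orders = spark.read.parquet("s3a://landing/orders/")

# Transform: join sources, then split valid rows from malformed ones
joined = customers.join(orders, "customer_id", "left")
good = joined.where(
    F.col("order_amount").isNotNull() & (F.col("order_amount") >= 0)
)
bad = joined.where(
    F.col("order_amount").isNull() | (F.col("order_amount") < 0)
)

# Load: publish clean data; keep rejects aside for data-quality review
good.write.mode("overwrite").parquet("s3a://warehouse/customer_orders/")
bad.write.mode("append").parquet("s3a://quarantine/customer_orders/")
```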
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As a Data Engineer II in the Deprecation Accelerator scope, you will be responsible for designing and maintaining scalable data pipelines both on-premises and in the cloud. Your role will involve understanding input and output data sources, managing upstream and downstream dependencies, and ensuring data quality. A crucial aspect of the job is deprecating migrated workflows and migrating workflows to new systems when necessary. The ideal candidate will have expertise in tools like Git, Apache Airflow, Apache Spark, SQL, data migration, and data validation (a minimal Airflow sketch follows this posting).

Your key responsibilities will include:

Workflow Deprecation:
- Planning and executing the deprecation of migrated workflows by assessing their dependencies and consumption.
- Using tools and best practices to identify, mark, and communicate deprecated workflows to stakeholders.

Data Migration:
- Planning and executing data migration tasks to transfer data between storage systems or formats.
- Ensuring data accuracy and completeness during the migration processes.
- Implementing strategies to accelerate data migration by backfilling, validating, and preparing new data assets for use.

Data Validation:
- Defining and implementing data validation rules to guarantee data accuracy, completeness, and reliability.
- Using data validation solutions and anomaly detection methods to monitor data quality.

Workflow Management:
- Leveraging Apache Airflow to schedule, monitor, and automate data workflows.
- Developing and managing Directed Acyclic Graphs (DAGs) in Airflow to orchestrate complex data processing tasks.

Data Processing:
- Creating and maintaining data processing scripts using SQL and Apache Spark.
- Optimizing data processing for performance and efficiency.

Version Control:
- Using Git for version control and collaborating with the team to manage the codebase and track changes.
- Ensuring adherence to best practices in code quality and repository management.

Continuous Improvement:
- Staying updated with the latest advancements in data engineering and related technologies.
- Continuously enhancing and refactoring data pipelines, tooling, and processes to improve performance and reliability.

Skills and Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proficiency in Git for version control and collaborative development.
- Strong knowledge of SQL and experience with database technologies.
- Experience with data pipeline tools like Apache Airflow.
- Proficiency in Apache Spark for data processing and transformation.
- Familiarity with data migration and validation techniques.
- Understanding of data governance and security practices.
- Strong problem-solving skills and the ability to work both independently and in a team.
- Excellent communication skills to collaborate with a global team.
- Ability to thrive in a high-performing team environment.
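The Airflow sketch referenced above: a minimal DAG with a linear extract-validate-load chain, in the Airflow 2.x style. The DAG id, schedule, and task bodies are placeholders for illustration.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull data from the source system")


def validate():
    print("run row-count and null checks on the extracted data")


def load():
    print("publish validated data to the target store")


with DAG(
    dag_id="migration_backfill",          # placeholder name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                    # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_validate = PythonOperator(task_id="validate", python_callable=validate)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Linear dependency chain: extract -> validate -> load
    t_extract >> t_validate >> t_load
```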
Posted 3 weeks ago
15.0 - 21.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
As a Data Architect with over 15 years of experience, your primary responsibility will be to lead the design and implementation of scalable, secure, and high-performing data architectures. You will collaborate with business, engineering, and product teams to develop robust data solutions that support business intelligence, analytics, and AI initiatives.

Your key responsibilities will include designing and implementing enterprise-grade data architectures using cloud platforms such as AWS, Azure, or GCP; leading the definition of data architecture standards, guidelines, and best practices; and architecting scalable data solutions such as data lakes, data warehouses, and real-time streaming platforms. Collaborating with data engineers, analysts, and data scientists, you will ensure optimal solutions are delivered based on data requirements. You will also oversee data modeling activities encompassing conceptual, logical, and physical data models; ensure data security, privacy, and compliance with relevant regulations such as GDPR and HIPAA; define and implement data governance strategies alongside stakeholders; and evaluate data-related tools and technologies.

To excel in this position, you should have at least 15 years of experience in data architecture, data engineering, or database development, with strong experience architecting data solutions on major cloud platforms (AWS, Azure, or GCP). Proficiency in data management principles, data modeling, ETL/ELT pipelines, and modern data platforms and tools such as Snowflake, Databricks, and Apache Spark is required. Familiarity with programming languages such as Python, SQL, or Java, and with real-time data processing frameworks such as Kafka, Kinesis, or Azure Event Hubs, will be beneficial. Experience implementing data governance, data cataloging, and data quality frameworks is important, and knowledge of DevOps practices, CI/CD pipelines for data, and Infrastructure as Code (IaC) is a plus. Excellent problem-solving, communication, and stakeholder management skills are necessary. A Bachelor's or Master's degree in Computer Science, Information Technology, or a related field is preferred, along with certifications such as Cloud Architect or Data Architect (AWS/Azure/GCP).

Join us at Infogain, a human-centered digital platform and software engineering company, where you will have the opportunity to work on cutting-edge data and AI projects in a collaborative and inclusive work environment, with competitive compensation and benefits, while contributing to experience-led transformation for our clients across industries.
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
Haryana
On-site
You are a skilled QA / Data Engineer with 3-5 years of experience, joining a team focused on ensuring the quality and reliability of data-driven applications. Your expertise lies in manual testing and SQL, with additional knowledge of automation and performance testing being highly valuable. Your responsibilities include performing thorough testing and validation to guarantee the integrity of the applications.

Must-have skills:
- Extensive experience in manual testing within data-centric environments.
- Strong SQL skills for data validation and querying (a small validation sketch follows this posting).
- Familiarity with data engineering concepts such as ETL processes, data pipelines, and data warehousing.
- Experience with geospatial data.
- A solid understanding of QA methodologies and best practices for software and data testing.
- Excellent communication skills.

Good-to-have skills:
- Experience with automation testing tools and frameworks such as Selenium and JUnit for data pipelines.
- Knowledge of performance testing tools such as JMeter and LoadRunner for evaluating data systems.
- Familiarity with data engineering tools and platforms such as Apache Kafka, Apache Spark, and Hadoop.
- Understanding of cloud-based data solutions such as AWS, Azure, and Google Cloud, along with their testing methodologies.

Proficiency across these areas, from manual testing and SQL-based data validation through automation, performance testing, and cloud-based data solutions, will be crucial to excelling in this role.
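The validation sketch referenced above: the sort of SQL-backed checks a QA / data engineer might script, run here through PySpark. The table names and key column are assumptions.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

# Row-count reconciliation between the source and target of an ETL load
src_count = spark.table("staging.orders").count()
tgt_count = spark.table("warehouse.orders").count()
assert src_count == tgt_count, f"row-count mismatch: {src_count} vs {tgt_count}"

# Duplicate check on the assumed business key
dupes = spark.sql("""
    SELECT order_id, COUNT(*) AS n
    FROM warehouse.orders
    GROUP BY order_id
    HAVING COUNT(*) > 1
""")
assert dupes.count() == 0, "duplicate order_id values found"

# Null check on a mandatory column
nulls = spark.sql(
    "SELECT COUNT(*) AS n FROM warehouse.orders WHERE order_ts IS NULL"
).first()["n"]
assert nulls == 0, f"{nulls} rows missing order_ts"
```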
Posted 3 weeks ago
7.0 - 11.0 years
0 Lacs
Haryana
On-site
About Prospecta

Founded in 2002 in Sydney, Australia, with additional offices in India, North America, and Canada, and a local presence in Europe, the UK, and Southeast Asia, Prospecta is dedicated to providing top-tier data management and automation software for enterprise clients. Our journey began with a mission to offer innovative solutions, leading us to become a prominent data management software company over the years.

Our flagship product, MDO (Master Data Online), is an enterprise Master Data Management (MDM) platform designed to streamline data management processes, ensuring accurate, compliant, and relevant master data creation, as well as efficient data disposal. With a strong presence in asset-intensive industries such as Energy and Utilities, Oil and Gas, Mining, Infrastructure, and Manufacturing, we have established ourselves as a trusted partner in the field.

Culture at Prospecta

At Prospecta, our culture is centered around growth and embracing new challenges. We have a passionate team that collaborates seamlessly to deliver value to our customers. Our diverse backgrounds create an exciting work environment that fosters a rich tapestry of perspectives and ideas. We are committed to nurturing an environment that focuses on both professional and personal development. Career progression at Prospecta is not just about climbing the corporate ladder but about encountering a continuous stream of meaningful opportunities that enhance personal growth and technical proficiency, all under the guidance of exceptional leaders.

Our organizational structure emphasizes agility, responsiveness, and achieving tangible outcomes. If you thrive in a dynamic environment, enjoy taking on various roles, and are willing to go the extra mile to achieve goals, Prospecta is the ideal workplace for you. We continuously push boundaries while maintaining a sense of fun and celebrating victories, both big and small.

About the Job

Position: Jr. Platform Architect / Sr. Backend Developer
Location: Gurgaon

Role Summary: In this role, you will be responsible for implementing technology solutions in a cost-effective manner by understanding project requirements and communicating them effectively to all stakeholders and facilitators.

Key Responsibilities:
- Collaborate with enterprise architects, data architects, developers and engineers, data scientists, and information designers to identify and define the necessary data structures, formats, pipelines, metadata, and workload orchestration capabilities.
- Bring expertise in service architecture and development, ensuring high performance and scalability.
- Demonstrate experience in Spark, Elasticsearch, and SQL performance tuning and optimization.
- Show proficiency in the architectural design and development of large-scale data platforms and data applications.
- Apply hands-on experience with AWS, Azure, and OpenShift.
- Maintain a deep understanding of Spark and its internal architecture.
- Design and build new cloud data platforms and optimize them at the organizational level.
- Use strong hands-on experience in big data technologies such as Hadoop, Sqoop, Hive, and Spark, including DevOps.
- Apply solid SQL (Hive/Spark) skills and experience tuning complex queries.

Must-Have:
- 7+ years of experience.
- Proficiency in Java, Spring Boot, Apache Spark, AWS, OpenShift, PostgreSQL, Elasticsearch, message queues, microservice architecture, and Spark.

Nice-to-Have:
- Knowledge of Angular, Python, Scala, Azure, and Kafka; various file formats such as Parquet, AVRO, CSV, and JSON; and Hadoop, Hive, and HBase.

What will you get

Growth Path: At Prospecta, your career journey is filled with growth and opportunities. Depending on your career trajectory, you can kickstart your career or accelerate your professional development in a dynamic work environment. Your success is our priority, and as you exhibit your abilities and achieve results, you will have the opportunity to progress quickly into leadership roles. We are dedicated to helping you enhance your experience and skills, providing you with the necessary tools, support, and opportunities to reach new heights in your career.

Benefits:
- Competitive salary.
- Health insurance.
- Paid time off and holidays.
- Continuous learning and career progression.
- Opportunities to work onsite at various office locations and/or client sites.
- Participation in annual company events and workshops.
Posted 3 weeks ago
2.0 - 6.0 years
0 Lacs
Pune, Maharashtra
On-site
About Mindstix Software Labs:

Mindstix accelerates digital transformation for the world's leading brands. We are a team of passionate innovators specialized in Cloud Engineering, DevOps, Data Science, and Digital Experiences. Our UX studio and modern-stack engineers deliver world-class products for our global customers, which include Fortune 500 enterprises and Silicon Valley startups. Our work impacts a diverse set of industries: eCommerce, luxury retail, ISV and SaaS, consumer tech, and hospitality. A fast-moving open culture powered by curiosity and craftsmanship. A team committed to bold thinking and innovation at the very intersection of business, technology, and design. That's our DNA.

Roles and Responsibilities:

Mindstix is looking for a proficient Data Engineer. You are a collaborative person who takes pleasure in finding solutions to issues that add to the bottom line. You appreciate hands-on technical work and feel a sense of ownership. The role requires a keen eye for detail, work experience as a data analyst, and in-depth knowledge of widely used databases and technologies for data analysis. Your responsibilities include:
- Building outstanding domain-focused data solutions with internal teams, business analysts, and stakeholders.
- Applying data engineering practices and standards to develop robust and maintainable solutions.
- Being motivated by a fast-paced, service-oriented environment and interacting directly with clients on new features for future product releases.
- Being a natural problem-solver, intellectually curious across a breadth of industries and topics.
- Being acquainted with different aspects of data management, such as data strategy, architecture, governance, data quality, integrity, and data integration.
- Being extremely well-versed in designing incremental and full data-load techniques (a brief upsert sketch follows this posting).

Qualifications and Skills:
- Bachelor's or Master's degree in Computer Science, Information Technology, or allied streams.
- 2+ years of hands-on experience in the data engineering domain with DWH development.
- Experience with end-to-end data warehouse implementation on Azure or GCP.
- SQL and PL/SQL skills, implementing complex queries and stored procedures.
- Solid understanding of DWH concepts such as OLAP, ETL/ELT, RBAC, data modeling, data-driven pipelines, virtual warehousing, and MPP.
- Expertise in Databricks: Structured Streaming, Lakehouse architecture, DLT, data modeling, VACUUM, Time Travel, security, monitoring, dashboards, DBSQL, and unit testing.
- Expertise in Snowflake: monitoring, RBACs, virtual warehousing, query performance tuning, and Time Travel.
- Understanding of Apache Spark, Airflow, Hudi, Iceberg, Nessie, NiFi, Luigi, and Arrow (good to have).
- Strong foundations in computer science, data structures, algorithms, and programming logic.
- Excellent logical reasoning and data interpretation capability.
- Ability to interpret business requirements accurately.
- Exposure to working with multicultural international customers.
- Experience in the retail, supply chain, CPG, ecommerce, or health industries is a plus.

Who Fits Best
- You are a data enthusiast and problem solver.
- You are a self-motivated, fast learner with a strong sense of ownership and drive.
- You enjoy working in a fast-paced creative environment.
- You appreciate great design, have a strong sense of aesthetics, and have a keen eye for detail.
- You thrive in a customer-centric environment and can actively listen, empathize, and collaborate with globally distributed teams.
- You are a team player who wants to mentor and inspire others to do their best.
- You love expressing ideas and articulating well, with strong written and verbal English communication and presentation skills.
- You are detail-oriented with an appreciation for craftsmanship.

Benefits:
- Flexible working environment.
- Competitive compensation and perks.
- Health insurance coverage.
- Accelerated career paths.
- Rewards and recognition.
- Sponsored certifications.
- Global customers.
- Mentorship by industry leaders.

Location: This position is primarily based at our Pune (India) headquarters, and all potential hires will work from this location. A modern workplace is deeply collaborative by nature while also demanding a touch of flexibility. We embrace deep collaboration at our offices, with reasonable flexi-timing and hybrid options for our seasoned team members.

Equal Opportunity Employer.
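The upsert sketch referenced above: an incremental load implemented as a Delta Lake MERGE, one common realization of the incremental-load technique the posting names. Table names, paths, and the key column are assumptions, and it requires the delta-spark package or a Databricks runtime.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("incremental-load").getOrCreate()

# Today's changed rows, landed by an upstream extract (path assumed)
updates = spark.read.parquet("/landing/customers_delta/")

# Existing warehouse table (name assumed)
target = DeltaTable.forName(spark, "dwh.customers")

# Upsert: update rows whose key matches, insert the rest
(target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())
```

A full load, by contrast, would simply overwrite the target; the MERGE pattern touches only the changed keys, which is what makes the load incremental.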
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
You will be joining Lifesight as a Data Engineer in our Bengaluru office, playing a pivotal role in the Data and Business Intelligence organization. Your primary focus will be on leading deep data engineering projects and contributing to the growth of our data platform team. This is an exciting opportunity to shape our technical strategy and foster a strong data engineering team culture in India.

As a Data Engineer at Lifesight, you will be responsible for designing and constructing data platforms and services, managing data infrastructure in cloud environments, and enabling strategic business decisions across Lifesight products. Your role will involve building highly scalable, fault-tolerant distributed data processing systems, optimizing data quality in pipelines, and owning data mapping, transformations, and business logic. You will also engage in low-level system debugging and performance optimization, and actively participate in architecture discussions to drive new projects forward.

The ideal candidate will be proficient in Python and PySpark, with a deep understanding of Apache Spark, Spark tuning, and building data frames. Experience with big data technologies such as HDFS, YARN, MapReduce, Hive, Kafka, and Airflow, as well as NoSQL databases and cloud platforms like AWS and GCP, is essential. You should have at least 5 years of professional experience in data or software engineering, demonstrating expertise in data quality, data engineering, and various big data frameworks and tools.

In summary, as a Data Engineer at Lifesight, you will have the opportunity to work on cutting-edge data projects, collaborate with a talented team of engineers, and contribute to the ongoing success and innovation of Lifesight's data platform.
Posted 3 weeks ago
3.0 - 8.0 years
0 Lacs
Karnataka
On-site
You should have 3 to 8 years of experience and be located in Bangalore. As a full-stack developer, you must demonstrate an understanding of system design, data structures, and algorithms. Your software design and development expertise should encompass languages such as C, C++, Python, or Rust. You should also have experience building software tools and automating tasks using various scripts, tools, and services, and proficiency in Docker or other container technologies for cloud operations (preferably on AWS). Your experience should include working with databases such as PostgreSQL and MongoDB, as well as familiarity with data orchestration frameworks such as Apache Spark. Skills in C++ testing, GitLab, GTest, AWS, Linux, Python, and Docker are required for this role; knowledge of GDB, C++17, GCC, and Kubernetes is a plus.
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
Haryana
On-site
Quale Infotech is a company focused on Intelligent Process Automation (IPA), AI, and other new-age technologies. Our AI innovation hub is one of the leading research sites for new-age technologies, and the IPA practice at Quale Infotech is one of the largest and most respected, with experts who have decades of experience. You will be a proactive member of our AI team, working on cutting-edge data science and analytics projects. The role involves leveraging machine learning, artificial intelligence, and big data technologies to build intelligent analytics solutions that drive business insights and decision-making.

Key Responsibilities:
- Design, develop, and implement advanced machine learning models and AI-driven features for an analytics platform.
- Work with large-scale structured and unstructured data to extract meaningful insights and improve predictive capabilities.
- Collaborate with software engineers and product teams to integrate AI solutions into the analytics platform.
- Optimize model performance, scalability, and real-time processing capabilities.
- Develop and maintain data pipelines for efficient data processing and transformation.
- Research and stay updated with the latest advancements in AI, machine learning, and data science techniques.
- Provide technical leadership in best practices for AI/ML model deployment, monitoring, and lifecycle management.

Required Skills & Qualifications:
- Minimum of a Master's degree in Data Science, Machine Learning, Computer Science, Mathematics, or a related field.
- Proficiency in retrieval-augmented generation (RAG).
- Proficiency in Python, R, or Scala and ML frameworks such as TensorFlow, PyTorch, or scikit-learn.
- Experience with big data technologies such as Apache Spark, Hadoop, or Databricks.
- Strong understanding of NLP, deep learning, reinforcement learning, and AI algorithms.
- Strong problem-solving and analytical skills, with a passion for AI-driven innovation.
Posted 3 weeks ago
6.0 - 10.0 years
0 Lacs
Karnataka
On-site
About Logik

Are you driven to innovate? Are you energized by the excitement of building a high-growth startup with winning technology and proven product-market fit? Are you looking to join a team of A-players who keep customers first and take their work, but not themselves, seriously?

Logik was founded in 2021 by the godfathers of CPQ: our CEO Christopher Shutts and our Executive Chairman Godard Abel, who together co-founded BigMachines, the first-ever CPQ technology vendor, in the early 2000s. Today, we're reimagining what CPQ can and should be with our composable, AI-enabled platform that provides advanced configuration, transaction management, guided selling, and more.

We're a well-funded and fast-growing startup disrupting the CPQ space, with founders who created the category and a platform that's pushing boundaries in configure-price-quote and complex commerce. We're looking for an exceptional AI Backend Engineer to join our Bangalore team and help us build the next generation of AI-powered solutions.

Position Summary: As a Senior Backend Engineer with an AI & ML specialization, you will play a crucial role in designing and developing scalable, high-performance backend systems that support our AI models and data pipelines. You will work closely with data scientists, machine learning engineers, and other backend developers to ensure our platform delivers reliable, real-time insights and predictions.

Key Responsibilities:
- Design and develop robust, scalable backend services and APIs that handle large volumes of data and traffic.
- Implement data ingestion and processing pipelines to efficiently collect, store, and transform data for AI models.
- Develop and maintain efficient data storage solutions, including databases and data warehouses.
- Optimize backend systems for performance, scalability, and security.
- Collaborate with data scientists and machine learning engineers to integrate AI models into the backend infrastructure.
- Collaborate with DevOps to implement MLOps and integrate models and data engineering pipelines into highly available and reliable tech stacks.
- Troubleshoot and resolve technical issues related to backend systems and data pipelines.
- Stay up to date with the latest advancements in backend technologies and AI.

Requirements:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 6+ years of experience in backend development, with a focus on machine learning.
- Strong proficiency in Python and experience with popular frameworks such as Flask, Django, or FastAPI.
- Experience with SQL and NoSQL databases such as PostgreSQL, MySQL, MongoDB, or Redis.
- Experience with cloud platforms such as AWS, Azure, or GCP.
- Knowledge of data engineering, data pipelines, and data processing frameworks such as Apache Airflow, Apache Spark, or Dask.
- Knowledge of MLOps frameworks such as Kubeflow and experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of distributed computing and parallel programming.
- Excellent communication and problem-solving skills.
- Ability to work independently and as part of a team.

Preferred Skills:
- Understanding of AI concepts and machine learning frameworks (e.g., TensorFlow, PyTorch) is a plus.
- 3+ years of experience with Java or Go is a plus.
- Experience with real-time data processing and streaming technologies.

What We Offer:
- Competitive salary and benefits package.
- Opportunity to work on cutting-edge AI projects.
- Collaborative and supportive work environment.
- Continuous learning and professional development opportunities.
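A minimal FastAPI sketch of the backend-serving pattern this role centers on: an API endpoint that fronts a model prediction. The route, request shape, and the stubbed "model" are assumptions for illustration, not Logik's actual API.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class PredictRequest(BaseModel):
    features: list[float]


def stub_model(features: list[float]) -> float:
    # Stand-in for a real model loaded at startup (e.g., from a registry)
    return sum(features) / max(len(features), 1)


@app.post("/predict")
def predict(req: PredictRequest) -> dict:
    # Validate input via the pydantic schema, score, and return JSON
    return {"score": stub_model(req.features)}
```

Run locally with `uvicorn main:app`; in practice the stub would be replaced by a model client, and the service would sit behind the containerized, Kubernetes-managed stack the posting describes.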
Posted 3 weeks ago
2.0 - 6.0 years
0 Lacs
Hyderabad, Telangana
On-site
You will be joining Marriott Tech Accelerator, part of Marriott International, a prominent global leader in hospitality. Marriott International, Inc. maintains a comprehensive portfolio of lodging brands worldwide, encompassing hotels and residential properties in 141 countries and territories.

As a Production Support Software Engineer on the Data Privacy engineering team, your primary responsibility will be ensuring the proper use of Marriott's customer data throughout the organization, fostering customers' trust in Marriott's handling of their personal information. You will be instrumental in developing enterprise solutions that align with global privacy regulations and data governance best practices, using Marriott's Modern Data Platform cloud-based technologies. The ideal candidate will exhibit a deep passion for data and a relentless pursuit of success.

Your role will be pivotal in guaranteeing the accurate, complete, and timely processing of customer data. You will focus on data privacy and governance processes in a production environment, with key activities including onboarding new client systems, monitoring process health, troubleshooting production issues, and supporting ad hoc privacy and data governance requests.

Key Responsibilities:

Technical Leadership:
- Mentor team members and peers.
- Collaborate with multiple teams to ensure successful task completion.
- Identify opportunities for process enhancements.

Delivering Technology:
- Conduct analyses for service delivery processes.
- Ensure expected deliverables are met.
- Coordinate with application teams.
- Provide consultation for systems development.
- Coordinate deployment and production support activities.

IT Governance:
- Adhere to IT standards and processes.
- Maintain the balance between business and operational risk.
- Escalate risks when necessary.
- Follow project management standards.

Service Provider Management:
- Validate project plans developed by service providers.
- Plan resource utilization effectively.
- Monitor service provider outcomes.
- Resolve service delivery problems promptly.

Management Competencies: leadership, execution management, relationship building, talent development, and professional expertise.

Application Skills and Experience:

Must have:
- 2+ years of code enhancement experience in an enterprise environment.
- 2-4 years of SQL database querying experience.
- 2-4 years of data application development experience using AWS, Lambda, Apache Spark, Python, Airflow, and RDBMSs.

Important attributes:
- A sense of urgency in issue resolution.
- Ability to understand end-to-end business processes.
- Strong communication and analytical skills.
- Attention to detail and the ability to connect information.
- Proficiency in root-cause analysis.
- Capability to work independently.

Nice to have:
- Snowflake experience or certification.
- Agile methodology experience.
- AWS Solutions Architect or Developer certification.

Education and Certifications: a technical degree in Information Systems, Computer Science, or a related engineering domain.

Work location: Hyderabad, India. Work mode: Hybrid.
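A hypothetical sketch of the AWS Lambda glue code a support role like this might maintain: a handler that reacts to S3 object-created events and forwards work to a queue. The bucket, queue URL, payload fields, and downstream privacy pipeline are all assumptions, not Marriott specifics.

```python
import json

import boto3

sqs = boto3.client("sqs")
# Assumed queue feeding the downstream privacy-processing pipeline
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/privacy-requests"


def handler(event, context):
    """Triggered by S3 put-event notifications: one record per new object."""
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Hand the new file off for privacy processing via SQS
        sqs.send_message(
            QueueUrl=QUEUE_URL,
            MessageBody=json.dumps({"bucket": bucket, "key": key}),
        )
    return {"processed": len(records)}
```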
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
You will be joining Coders Brain Technology Pvt. Ltd., a global leader in services, digital, and business solutions. At Coders Brain, we partner with our clients to simplify, strengthen, and transform their businesses, and we are committed to providing the highest levels of certainty and satisfaction through our deep industry expertise and global network of innovation and delivery centers.

As a Data Engineer with a minimum of 5 years of experience, you will work remotely. Your role will involve collaborating with other developers to define and refine solutions and working with the business to deliver data and analytics projects. Your responsibilities will include data integration with tools such as Apache Spark, EMR, Glue, Kafka, Kinesis, and Lambda in the AWS cloud environment. You should have strong real-world experience in Python development, especially in PySpark on AWS. Designing, developing, testing, deploying, maintaining, and improving data integration pipelines will be a key part of your role.

Additionally, you should have experience with Python and its common libraries, Perl, and Unix scripts, along with analytical skills with databases. Proficiency in source control systems like Git and Bitbucket and continuous integration tools like Jenkins is required, and experience with continuous deployment (CI/CD), Databricks, Airflow, and Apache Spark will be beneficial. Knowledge of databases such as Oracle, SQL Server, PostgreSQL, Redshift, or MySQL is essential, and exposure to ETL tools such as Informatica is preferred. A degree in Computer Science, Computer Engineering, or Electrical Engineering is desired.

If you are interested in this opportunity, click the apply button. Alternatively, you can send your resume to prerna.jain@codersbrain.com or pooja.gupta@codersbrain.com.
Posted 3 weeks ago
2.0 - 5.0 years
5 - 11 Lacs
Chennai
Hybrid
Job Posting: Support Analyst Big Data & Application Support (Chennai)
Location: Chennai, India (only Chennai-based candidates preferred)
Experience: 2 to 5 years
Employment Type: Full-Time | Hybrid Model
Department: Digital Technology Services IT Digital
Function: DaaS (Data as a Service), AI & RPA Support
Note: Only candidates meeting the above criteria will be contacted for the next steps.

Role Overview
We are looking for a Support Analyst to join our dynamic DTS IT Digital team in Chennai. In this role, you will support and maintain data platforms, AI/RPA systems, and big data ecosystems. You will play a key part in production support, rapid incident recovery, and platform improvements, working with global stakeholders.

Key Responsibilities
- Serve as L2/L3 support and point of contact for global support teams
- Perform detailed root cause analysis (RCA) and prevent incident recurrence
- Maintain, monitor, and support big data platforms and ETL tools
- Coordinate with multiple teams for incident and change management
- Contribute to disaster recovery planning, resiliency events, and capacity management
- Document support processes and fixes, and participate in monthly RCA reviews

Technical Skills Required
- Proficient in the Unix/Linux command line and basic Windows server operations
- Hands-on with big data and ETL tools such as Hadoop, MapR, HDFS, Spark, Apache Drill, Yarn, Oozie, Ab Initio, Alteryx, and Spotfire
- Strong SQL skills and understanding of data processing
- Familiarity with problem/change/incident management processes
- Good scripting knowledge (Shell/Python, optional but preferred)

What We're Looking For
- Bachelor's degree in Computer Science, IT, or a related field
- 2 to 5 years of experience in application support or big data platform support
- Ability to communicate technical issues clearly to non-technical stakeholders
- Strong problem-solving skills and a collaborative mindset
- Experience in banking, financial services, or enterprise-grade systems is a plus

Why Join Us?
- Be part of a global innovation and technology team
- Opportunity to work on AI, RPA, and large-scale data platforms
- Hybrid work culture with strong global collaboration
- Career development in a stable and inclusive banking giant

Ready to Apply?
If you're a passionate technologist with strong support experience and big data platform knowledge, we want to hear from you!
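As a small example of the scripting side of the RCA work this role involves, the sketch below scans an application log for error bursts and ranks the noisiest components. The log path and message format are hypothetical; only the Python standard library is used.

```python
# Hedged sketch of a routine support task: counting ERROR lines per component
# to find a starting point for root cause analysis. Path and format are assumed.
import re
from collections import Counter

LOG_PATH = "/var/log/app/application.log"  # hypothetical log location
ERROR_RE = re.compile(r"ERROR\s+(\S+)")    # captures the token after "ERROR"

counts = Counter()
with open(LOG_PATH) as log:
    for line in log:
        match = ERROR_RE.search(line)
        if match:
            counts[match.group(1)] += 1

# Print the ten components emitting the most errors, highest first.
for component, count in counts.most_common(10):
    print(f"{count:6d}  {component}")
```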
Posted 3 weeks ago
4.0 - 9.0 years
8 - 13 Lacs
Kolkata
Work from Office
As a Mid Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges.

What you'll do:
- Design and develop data processing pipelines and analytics solutions using Databricks.
- Architect scalable and efficient data models and storage solutions on the Databricks platform.
- Collaborate with architects and other teams to migrate the current solution to Databricks.
- Optimize the performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.
- Apply best practices for data governance, security, and compliance on the Databricks platform.
- Mentor junior engineers and provide technical guidance.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.

You'll be expected to have:
- Bachelor's or master's degree in computer science, engineering, or a related field.
- 5 to 8 years of overall experience, including 2+ years designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively in cross-functional teams.
- Good to have: experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.
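A typical pipeline step on Databricks of the kind described above is an incremental upsert into a Delta table. The sketch below shows one, assuming the delta-spark package is available and using hypothetical paths and a hypothetical join key.

```python
# Minimal sketch: merge a staged batch of updates into a Delta table.
# Paths, table layout, and the "order_id" key are assumptions.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-upsert").getOrCreate()

updates = spark.read.parquet("/mnt/staging/orders_updates/")  # hypothetical staging path

target = DeltaTable.forPath(spark, "/mnt/lake/orders")        # hypothetical Delta table

(
    target.alias("t")
    .merge(updates.alias("u"), "t.order_id = u.order_id")     # assumed join key
    .whenMatchedUpdateAll()      # overwrite existing rows with fresh values
    .whenNotMatchedInsertAll()   # insert rows that are new to the table
    .execute()
)
```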
Posted 3 weeks ago
4.0 - 9.0 years
8 - 13 Lacs
Bengaluru
Work from Office
Role: Senior Databricks Engineer
As a Senior Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges.

What you'll do:
- Design and develop data processing pipelines and analytics solutions using Databricks.
- Architect scalable and efficient data models and storage solutions on the Databricks platform.
- Collaborate with architects and other teams to migrate the current solution to Databricks.
- Optimize the performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.
- Apply best practices for data governance, security, and compliance on the Databricks platform.
- Mentor junior engineers and provide technical guidance.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.

You'll be expected to have:
- Bachelor's or master's degree in computer science, engineering, or a related field.
- 5 to 8 years of overall experience, including 2+ years designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively in cross-functional teams.
- Good to have: experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.
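Since this posting emphasizes optimizing Databricks cluster and job performance, here is a hedged sketch of one routine Delta Lake maintenance step: compacting small files and Z-ordering on a frequently filtered column. The table and column names are hypothetical; OPTIMIZE, ZORDER BY, and VACUUM are Delta Lake SQL commands available on Databricks.

```python
# Minimal sketch, assuming a Delta table named sales.orders that is often
# filtered by customer_id. Names are illustrative only.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-maintenance").getOrCreate()

# Compact small files and co-locate rows by a commonly filtered column,
# which reduces the data scanned by selective queries.
spark.sql("OPTIMIZE sales.orders ZORDER BY (customer_id)")

# Remove data files no longer referenced by the table (default retention applies).
spark.sql("VACUUM sales.orders")
```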
Posted 3 weeks ago
5.0 - 10.0 years
3 - 5 Lacs
Bengaluru, Delhi / NCR, Mumbai (All Areas)
Work from Office
Job Title: Azure Databricks Developer
Experience: 5+ Years
Location: PAN India (Remote/Hybrid as per project requirement)
Employment Type: Full-time

Job Summary:
We are hiring an experienced Azure Databricks Developer to join our dynamic data engineering team. The ideal candidate will have strong expertise in building and optimizing big data solutions using Azure Databricks, Spark, and other Azure data services.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using Azure Databricks and Apache Spark.
- Integrate and manage large datasets using Azure Data Lake, Azure Data Factory, and other Azure services.
- Implement Delta Lake for efficient data versioning and performance optimization.
- Collaborate with cross-functional teams, including data scientists and BI developers.
- Ensure best practices for data security, governance, and compliance.
- Monitor performance and troubleshoot Spark clusters and data pipelines.

Skills & Requirements:
- Minimum 5 years of experience in data engineering, with at least 2+ years in Azure Databricks.
- Proficiency in Apache Spark (PySpark/Scala).
- Strong hands-on experience with Azure services: ADF, ADLS, Synapse Analytics.
- Expertise in building and managing ETL/ELT pipelines.
- Strong SQL skills and experience with performance tuning.
- Experience with CI/CD pipelines and Azure DevOps is a plus.
- Good understanding of data modeling, partitioning, and data lake architecture.
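As an illustration of the ADLS-to-Delta pattern this role centers on, the sketch below reads raw JSON from Azure Data Lake Storage and writes a partitioned Delta table. The storage account, container, and partition column are hypothetical, and authentication (for example, via a service principal) is assumed to be configured on the cluster.

```python
# Hedged sketch: land raw ADLS data as a partitioned Delta table.
# Account, container, paths, and the partition column are assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("adls-to-delta").getOrCreate()

raw = spark.read.json(
    "abfss://raw@examplestorage.dfs.core.windows.net/bookings/"  # hypothetical source
)

(
    raw.write
    .format("delta")
    .mode("append")
    .partitionBy("booking_date")  # assumed partition column for pruning
    .save("abfss://curated@examplestorage.dfs.core.windows.net/bookings_delta/")
)
```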
Posted 3 weeks ago