5.0 - 9.0 years
0 Lacs
chennai, tamil nadu
On-site
The ideal candidate is expected to possess the following skill sets:
- Ability to lead a technical team and provide mentorship and leadership to junior team members.
- Experience in implementing modern data pipelines using Snowflake (a minimal load sketch follows this listing).
- Strong experience in creating and managing all SQL Server database objects, including jobs, tables, views, indices, stored procedures, UDFs, and triggers.
- Proficiency in building and supporting data sources for BI/analytical tools such as SSRS and Power BI.
- Expertise in all components of SQL Server, including the Database Engine, SSIS, and SSRS.
- Proficiency in ETL and ELT processes within SQL Server or Azure.
- Experience in implementing and supporting the Microsoft BI stack, including SSIS and SSAS.
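To make the Snowflake pipeline requirement concrete, here is a minimal, illustrative ELT load step using the snowflake-connector-python package: it stages a local extract and bulk-loads it with COPY INTO. The connection parameters and the SALES_STAGE/SALES_RAW object names are placeholders, not details from the posting.

```python
# Illustrative Snowflake load step; account details and object names
# (ETL_WH, RAW_DB, SALES_STAGE, SALES_RAW) are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="RAW_DB",
    schema="PUBLIC",
)
cur = conn.cursor()
try:
    # Stage the local extract, then bulk-load it into a raw table.
    # PUT compresses the file to .gz by default.
    cur.execute("PUT file:///tmp/sales_extract.csv @SALES_STAGE OVERWRITE = TRUE")
    cur.execute("""
        COPY INTO SALES_RAW
        FROM @SALES_STAGE/sales_extract.csv.gz
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
    """)
finally:
    cur.close()
    conn.close()
```

Downstream views or stored procedures for SSRS/Power BI would then read from the loaded table; that part is omitted here.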
Posted 6 days ago
1.0 - 4.0 years
3 - 6 Lacs
Bengaluru, Karnataka, India
On-site
About Kazam:
We are an agnostic EV charging software platform building India's largest smart and affordable EV charging network. Through our partnerships with fleets, CPOs, RWAs, and OEMs, we have created a robust charging network with over 7,000 devices on our platform. Kazam enables fleet companies, charge point operators, and OEMs by providing an affordable and complete software stack, including a white-label template app (both Android and iOS), API integration, a load management solution, and a charger monitoring dashboard, so that you can do hassle-free business without worrying about technology. (Please note that you can use both Kazam chargers and OCPP-enabled charging points via our platform.) Not only that, we are able to drive utilisation to your charging stations by leveraging Kazam's network of 50,000+ EV drivers. Through our partnerships with fleets, CPOs, RWAs, and OEMs, we have created a robust charging network with over 11,000 devices on our platform.

Key Responsibilities:
- Work with analytics teams to ensure data is clean, structured, and accessible for analysis and reporting
- Implement data quality and governance frameworks to ensure data integrity across the organization
- Contribute to data exploration and analysis projects by delivering robust, reusable data pipelines that support deep data analysis
- Design, implement, and optimize scalable data architectures, including data lakes, data warehouses, and real-time streaming solutions
- Develop and maintain ETL/ELT pipelines to ensure efficient data flow from multiple sources
- Leverage automation to streamline data ingestion, processing, and integration tasks
- Develop and maintain scripts for data automation and orchestration, ensuring timely and accurate delivery of data products
- Work closely with DevOps and Cloud teams to ensure data infrastructure is secure, reliable, and scalable

Qualifications & Skills:
Technical Skills:
- ETL/ELT: Proficient in building and maintaining ETL/ELT processes using tools such as Apache Airflow, dbt, Talend, or custom scripts in Python, SQL, NoSQL, etc.
- Analytics: Strong understanding of data analytics concepts, with experience in creating data models and working closely with BI/Analytics teams
- Automation: Hands-on experience with data automation tools (e.g., Apache Airflow, Prefect) and scripting (Python, Shell, etc.) to automate data workflows (an illustrative Airflow sketch follows this listing)
- Data Architecture: Experience in designing and maintaining data lakes, warehouses, and real-time streaming architectures using technologies such as AWS/GCP/Azure, Hadoop, Spark, Kafka, etc.
Soft Skills:
- Excellent problem-solving skills and the ability to work independently and as part of a team
- Ability to collaborate cross-functionally with analytics, business intelligence, and product teams
- Strong communication skills, with the ability to translate complex technical concepts for non-technical stakeholders
- Attention to detail and commitment to data quality and governance
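As an illustration of the orchestration work described above, here is a minimal Apache Airflow DAG that schedules a daily extract-and-load job. The DAG name, task functions, and source/target are hypothetical placeholders, not Kazam's actual pipeline.

```python
# Minimal Airflow 2.x DAG sketch: a daily two-step ingest pipeline.
# "charger_session_ingest" and the task bodies are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_sessions(**context):
    # Pull the previous day's charging sessions from a source system.
    print("extracting sessions for", context["ds"])


def load_sessions(**context):
    # Write cleaned rows to the warehouse or data lake.
    print("loading sessions for", context["ds"])


with DAG(
    dag_id="charger_session_ingest",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_sessions)
    load = PythonOperator(task_id="load", python_callable=load_sessions)
    extract >> load  # load runs only after extract succeeds
```

In practice each task would call out to the real source APIs, dbt models, or warehouse loaders, and failures would be surfaced through Airflow's retry and alerting configuration.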
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
hyderabad, telangana
On-site
You will be joining the Analytics Engineering team at DAZN, where your primary responsibility will be transforming raw data into valuable insights that drive decision-making across various aspects of our global business. This includes content, product development, marketing strategies, and revenue generation. Your role will involve constructing dependable and scalable data pipelines and models to ensure that data is easily accessible and actionable for all stakeholders.

As an Analytics Engineer with a minimum of 2 years of experience, you will play a crucial part in the construction and maintenance of our advanced data platform. Utilizing tools such as dbt, Snowflake, and Airflow, you will be tasked with creating well-organized, well-documented, and reliable datasets. This hands-on position is perfect for individuals aiming to enhance their technical expertise while contributing significantly to our high-impact analytics operations.

Your key responsibilities will involve:
- Developing and managing scalable data models through the use of dbt and Snowflake
- Creating and coordinating data pipelines using Airflow or similar tools
- Collaborating with various teams within DAZN to transform business requirements into robust datasets
- Ensuring data quality through rigorous testing, validation, and monitoring procedures (a minimal validation sketch follows this listing)
- Adhering to best practices in code versioning, CI/CD processes, and data documentation
- Contributing to the enhancement of our data architecture and team standards

We are seeking individuals with:
- A minimum of 2 years of experience in analytics/data engineering or related fields
- Proficiency in SQL and a solid understanding of cloud data warehouses (preference for Snowflake)
- Familiarity with dbt for data modeling and transformation
- Knowledge of Airflow or other workflow orchestration tools
- Understanding of ELT processes, data modeling techniques, and data governance principles
- Strong communication and collaboration skills

Nice to have:
- Previous experience in media, OTT, or sports technology sectors
- Familiarity with BI tools such as Looker, Tableau, or Power BI
- Exposure to testing frameworks like dbt tests or Great Expectations
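To illustrate the kind of data-quality validation mentioned above, here is a small Python sketch of checks analogous to dbt's not_null and unique tests. The dataset and column names (event_id, user_id, watch_minutes) are hypothetical, not DAZN's actual schema.

```python
# Illustrative data-quality checks on a modelled dataset; column names are
# placeholders chosen for the example only.
import pandas as pd


def validate_playback_facts(df: pd.DataFrame) -> list:
    """Return a list of data-quality failures found in the dataframe."""
    failures = []
    if df["user_id"].isnull().any():
        failures.append("user_id contains nulls")
    if df["event_id"].duplicated().any():
        failures.append("event_id is not unique")
    if (df["watch_minutes"] < 0).any():
        failures.append("watch_minutes has negative values")
    return failures


if __name__ == "__main__":
    sample = pd.DataFrame(
        {"event_id": [1, 2, 2], "user_id": ["a", None, "c"], "watch_minutes": [30, 45, -5]}
    )
    print(validate_playback_facts(sample))
    # ['user_id contains nulls', 'event_id is not unique',
    #  'watch_minutes has negative values']
```

In a dbt-based stack the same rules would more likely be expressed as schema tests in YAML and run as part of the pipeline rather than as ad hoc Python.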
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
As a Software Engineer, Technical Lead at Zywave, you will play a crucial role in developing cutting-edge SaaS applications that disrupt and innovate our market space. Zywave is dedicated to continuous improvement and growth, and we are looking for individuals who can lead our Product Development team in supporting our major company initiatives.

Your responsibilities will include taking ownership of development efforts, mentoring junior engineers, and collaborating with team members to develop, test, troubleshoot, and maintain Zywave's web-based applications. You will contribute to all aspects of the product development lifecycle, ensuring that our multi-tenant SaaS application remains best in class.

To excel in this role, you should be able to develop rich client web applications using the latest .NET technologies, lead team members in sprint cycle planning and software development practices, and implement unit tests and build scripts for bug-free releases. Additionally, you should have a strong technical background in .NET development technologies, Microsoft SQL Server, Snowflake, and ELT processes.

To be considered a strong fit for this position, you should possess a Bachelor's degree in information systems technology or computer science, along with at least 5 years of relevant experience. You should also demonstrate out-of-the-box thinking, problem-solving skills, and excellent communication abilities. Familiarity with Agile methodologies, web service programming, and Internet design methodologies is highly desirable.

At Zywave, you will have the opportunity to work in a dynamic environment where you can learn, grow, and contribute to making Zywave the best in the business. If you are looking to be part of a team that values innovation, collaboration, and continuous improvement, Zywave is the place for you.

Join Zywave, a leader in the insurtech industry, and be a part of a company that powers the modern insurance lifecycle. With over 15,000 insurers, agencies, and brokerages worldwide using Zywave solutions, you will be part of a team that accelerates digitalization, distribution, and profitability in the insurance sector. Learn more about Zywave at www.zywave.com.
Posted 1 month ago
8.0 - 10.0 years
10 - 12 Lacs
Bengaluru
Work from Office
Senior Data Engineer (Databricks, PySpark, SQL, Cloud Data Platforms, Data Pipelines)

Job Summary:
Synechron is seeking a highly skilled and experienced Data Engineer to join our innovative analytics team in Bangalore. The primary purpose of this role is to design, develop, and maintain scalable data pipelines and architectures that empower data-driven decision-making and advanced analytics initiatives. As a critical contributor within our data ecosystem, you will enable the organization to harness large, complex datasets efficiently, supporting strategic business objectives and ensuring high standards of data quality, security, and performance. Your expertise will directly contribute to building robust, efficient, and secure data solutions that drive business value across multiple domains.

Required Software & Tools:
- Databricks Platform (hands-on experience with Databricks notebooks, clusters, and workflows)
- PySpark (proficient in developing and optimizing Spark jobs)
- SQL (advanced proficiency in writing and optimizing complex queries)
- Data orchestration tools such as Apache Airflow or similar (experience in scheduling and managing data workflows)
- Cloud data platforms (experience with cloud environments such as AWS, Azure, or Google Cloud)
- Data warehousing solutions (Snowflake highly preferred)

Preferred Software & Tools:
- Kafka or other streaming frameworks (e.g., Confluent, MQTT)
- CI/CD tools for data pipelines (e.g., Jenkins, GitLab CI)
- DevOps practices for data workflows

Programming Languages: Python (expert level); familiarity with other languages such as Java or Scala is advantageous.

Overall Responsibilities:
- Architect, develop, and maintain scalable, resilient data pipelines and architectures supporting business analytics, reporting, and data science use cases.
- Collaborate closely with data scientists, analysts, and cross-functional teams to gather requirements and deliver optimized data solutions aligned with organizational goals.
- Ensure data quality, consistency, and security across all data workflows, adhering to best practices and compliance standards.
- Optimize data processes for enhanced performance, reliability, and cost efficiency.
- Integrate data from multiple sources, including cloud data services and streaming platforms, ensuring seamless data flow and transformation.
- Lead efforts in performance tuning and troubleshooting data pipelines to resolve bottlenecks and improve throughput.
- Stay up-to-date with emerging data engineering technologies and contribute to continuous improvement initiatives within the team.
Technical Skills (By Category):

Programming Languages:
- Essential: Python, SQL
- Preferred: Scala, Java

Databases/Data Management:
- Essential: Data modeling, ETL/ELT processes, data warehousing (Snowflake experience highly preferred)
- Preferred: NoSQL databases, Hadoop ecosystem

Cloud Technologies:
- Essential: Experience with cloud data services (AWS, Azure, GCP) and deployment of data pipelines in cloud environments
- Preferred: Cloud-native data tools and architecture design

Frameworks and Libraries:
- Essential: PySpark, Spark SQL, Kafka, Airflow (a minimal PySpark sketch follows this listing)
- Preferred: Streaming frameworks, TensorFlow (for data prep)

Development Tools and Methodologies:
- Essential: Version control (Git), CI/CD pipelines, Agile methodologies
- Preferred: DevOps practices in data engineering, containerization (Docker, Kubernetes)

Security Protocols:
- Familiarity with data security, encryption standards, and compliance best practices

Experience:
- Minimum of 8 years of professional experience in Data Engineering or related roles
- Proven track record of designing and deploying large-scale data pipelines using Databricks, PySpark, and SQL
- Practical experience in data modeling, data warehousing, and ETL/ELT workflows
- Experience working with cloud data platforms and streaming data frameworks such as Kafka or equivalent
- Demonstrated ability to work with cross-functional teams, translating business needs into technical solutions
- Experience with data orchestration and automation tools is highly valued
- Prior experience in implementing CI/CD pipelines or DevOps practices for data workflows (preferred)

Day-to-Day Activities:
- Design, develop, and troubleshoot data pipelines for ingestion, transformation, and storage of large datasets
- Collaborate with data scientists and analysts to understand data requirements and optimize existing pipelines
- Automate data workflows and improve pipeline efficiency through performance tuning and best practices
- Conduct data quality audits and ensure data security protocols are followed
- Manage and monitor data workflows, troubleshoot failures, and implement fixes proactively
- Contribute to documentation, code reviews, and knowledge sharing within the team
- Stay informed of evolving data engineering tools, techniques, and industry best practices, incorporating them into daily work processes

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
- Relevant certifications such as Databricks Certified Data Engineer, AWS Certified Data Analytics, or equivalent (preferred)
- Continuous learning through courses, workshops, or industry conferences on data engineering and cloud technologies

Professional Competencies:
- Strong analytical and problem-solving skills with a focus on scalable solutions
- Excellent communication skills to effectively collaborate with technical and non-technical stakeholders
- Ability to prioritize tasks, manage time effectively, and deliver within tight deadlines
- Demonstrated leadership in guiding team members and driving project success
- Adaptability to evolving technological landscapes and innovative thinking
- Commitment to data privacy, security, and ethical handling of information
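As a minimal sketch of the PySpark pipeline work this role describes, the snippet below reads a raw dataset, applies a filter and aggregation, and writes a curated output. The input/output paths and column names (orders, status, order_date, amount) are placeholders, not Synechron's actual data model.

```python
# Illustrative PySpark batch aggregation; paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_orders_aggregate").getOrCreate()

# Read the raw zone (placeholder path), keep completed orders only.
orders = spark.read.parquet("/mnt/raw/orders")

daily_revenue = (
    orders
    .filter(F.col("status") == "COMPLETE")
    .groupBy("order_date")
    .agg(
        F.count("*").alias("order_count"),
        F.sum("amount").alias("revenue"),
    )
)

# Write the curated aggregate back to the lake for BI consumption.
daily_revenue.write.mode("overwrite").parquet("/mnt/curated/daily_revenue")
```

On Databricks the same logic would typically run inside a notebook or job cluster, with the Spark session provided by the platform rather than created explicitly.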
Posted 1 month ago
2 - 4 years
4 - 6 Lacs
Hyderabad
Work from Office
Data Engineer - Graph, Research Data and Analytics

What you will do:
Let's do this. Let's change the world. In this vital role you will be part of Research's Semantic Graph Team, which is seeking a qualified individual to design, build, and maintain solutions for scientific data that drive business decisions for Research. The successful candidate will construct scalable and high-performance data engineering solutions for extensive scientific datasets and collaborate with Research partners to address their data requirements. The ideal candidate should have experience in the pharmaceutical or biotech industry, leveraging their expertise in semantics, taxonomies, and linked data principles to ensure data harmonization and interoperability. Additionally, this individual should demonstrate robust technical skills, proficiency with data engineering technologies, and a thorough understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and implement data pipelines, ETL/ELT processes, and data integration solutions
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks
- Develop and maintain semantic data models for biopharma scientific data, data dictionaries, and other documentation to ensure data accuracy and consistency
- Optimize large datasets for query performance
- Collaborate with global multi-functional teams, including research scientists, to understand data requirements and design solutions that meet business needs
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate with Data Architects, Business SMEs, Software Engineers, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
- Identify and resolve data-related challenges
- Adhere to standard processes for coding, testing, and designing reusable code/components
- Explore new tools and technologies that will help improve ETL platform performance
- Participate in sprint planning meetings and provide estimations on technical implementation
- Maintain comprehensive documentation of processes, systems, and solutions

What we expect of you:
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications and Experience:
- Doctorate degree, OR
- Master's degree with 2-4 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics, or a related field, OR
- Bachelor's degree with 4-6 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics, or a related field, OR
- Diploma with 7-9 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics, or a related field

Preferred Qualifications and Experience:
- 4+ years of experience in designing and supporting biopharma scientific research data analytics (software platforms)

Functional Skills:
Must-Have Skills:
- Proficiency in SQL and Python for data engineering, test automation frameworks (pytest), and scripting tasks (a minimal pytest sketch follows this listing)
- Hands-on experience with data technologies and platforms, such as Databricks, workflow orchestration, and performance tuning of big data processing
- Excellent problem-solving skills and the ability to work with large, complex datasets

Good-to-Have Skills:
- A passion for tackling complex challenges in drug discovery with technology and data
- System administration skills, such as managing Linux and Windows servers, configuring network infrastructure, and automating tasks with shell scripting; examples include setting up and maintaining virtual machines, troubleshooting server issues, and ensuring data security through regular updates and backups
- Solid understanding of data modeling, data warehousing, and data integration concepts
- Solid experience using RDBMS (e.g., Oracle, MySQL, SQL Server, PostgreSQL)
- Knowledge of cloud data platforms (AWS preferred)
- Experience with data visualization tools (e.g., Dash, Plotly, Spotfire)
- Experience with diagramming and collaboration tools such as Miro, Lucidchart, or similar tools for process mapping and brainstorming
- Experience writing and maintaining user documentation in Confluence
- Understanding of data governance frameworks, tools, and standard processes

Professional Certifications:
- Databricks Certified Data Engineer Professional (preferred)

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Good communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills
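As an illustration of the pytest-based test automation this posting mentions, here is a small sketch of unit tests around a data-cleaning helper. The normalise_compound_ids function and its columns are hypothetical, shown only to indicate the kind of tests a pipeline step might carry.

```python
# Illustrative pytest-style tests for a transformation step; the helper and
# its column names are hypothetical placeholders.
import pandas as pd


def normalise_compound_ids(df: pd.DataFrame) -> pd.DataFrame:
    """Drop rows missing a compound_id and upper-case the identifier."""
    out = df.dropna(subset=["compound_id"]).copy()
    out["compound_id"] = out["compound_id"].str.upper()
    return out


def test_drops_rows_without_compound_id():
    df = pd.DataFrame({"compound_id": ["abc-1", None], "assay": ["a", "b"]})
    assert len(normalise_compound_ids(df)) == 1


def test_upper_cases_identifiers():
    df = pd.DataFrame({"compound_id": ["abc-1"], "assay": ["a"]})
    assert normalise_compound_ids(df)["compound_id"].iloc[0] == "ABC-1"
```

Running `pytest` against a module like this gives quick regression coverage for the transformation logic before it is wired into a Databricks or orchestration job.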
Posted 3 months ago