3 Job Openings at Magellanic Cloud
Data Engineer

Hyderabad, Bengaluru

5 - 10 years

INR 12.0 - 22.0 Lacs P.A.

Work from Office

Full Time

Experience Range: 4 - 12+ years
Work Location: Bangalore (preferred)
Must Have Skills: Airflow, BigQuery, Hadoop, PySpark, Spark/Scala, Python, Spark SQL, Snowflake, ETL, Data Modelling, Erwin or ER Studio, Stored Procedures & Functions, AWS, Azure Databricks, Azure Data Factory
No. of Openings: 10+

Job Description:
We have multiple open roles with our clients:
Role 1: Data Engineer
Role 2: Support Data Engineer
Role 3: ETL Support Engineer
Role 4: Senior Data Modeler
Role 5: Data Engineer - Databricks

Please find below the JD for each role.

Role 1: Data Engineer
- 5+ years of experience in data engineering or a related role.
- Proficiency in Apache Airflow for workflow scheduling and management.
- Strong experience with the Hadoop ecosystem, including HDFS, MapReduce, and Hive.
- Expertise in Apache Spark/Scala for large-scale data processing.
- Proficiency in Python.
- Advanced SQL skills for data analysis and reporting.
- Experience with cloud platforms (e.g., AWS, Google Cloud, Azure) is a plus.
- Designs, proposes, builds, and maintains databases, data lakes, data pipelines that transform and model data, and reporting and analytics solutions.
- Understands business problems and processes through direct conversations with customers, sees the big picture, and translates it into specific solutions.
- Identifies issues early, tactfully raises concerns, and proposes solutions.
- Participates in peer code reviews.
- Clearly articulates the pros and cons of various tools and approaches.
- Documents and diagrams proposed solutions.

Role 2: Support Data Engineer
- Prioritize and resolve Business-As-Usual (BAU) support queries within agreed Service Level Agreements (SLAs) while ensuring application stability.
- Drive engineering delivery to reduce technical debt across the production environment, collaborating with development and infrastructure teams.
- Perform technical analysis of the production platform to identify and address performance and resiliency issues.
- Participate in the Software Development Lifecycle (SDLC) to improve production standards and controls.
- Build and maintain the support knowledge database, updating the application runbook with known tasks and managing event monitoring.
- Create health-check monitors, dashboards, synthetic transactions, and alerts to increase monitoring and observability of systems at scale.
- Participate in an on-call rotation supporting application release validation, alert response, and incident management.
- Collaborate with development, product, and customer success teams to identify and resolve technical problems.
- Research and implement recommendations from post-mortem analyses for continuous improvement.
- Document issue details and solutions in our ticketing systems (JIRA and ServiceNow).
- Assist in creating and maintaining technical documentation, runbooks, and knowledge-base articles.
- Navigate a complex system, requiring deep troubleshooting/debugging skills and the ability to manage multiple contexts efficiently.
- Oversee the collection, storage, and maintenance of production data, ensuring its accuracy and availability for analysis.
- Monitor data pipelines and production systems to ensure smooth operation, and quickly address any issues that arise.
- Implement and maintain data quality standards, conducting regular checks to ensure data integrity.
- Identify and resolve technical issues related to data processing and production systems.
- Work closely with data engineers, analysts, and other stakeholders to optimize data workflows and improve production efficiency.
- Contribute to continuous improvement initiatives by analyzing data to identify areas for process optimization.

Role 3: ETL Support Engineer
- 6+ years of experience with ETL support and development.
- Experience with popular ETL tools such as Talend and Microsoft SSIS.
- Experience with relational databases (e.g., SQL Server, Postgres).
- Experience with the Snowflake data warehouse.
- Proficiency in writing complex SQL queries for data validation, comparison, and manipulation.
- Familiarity with version control systems such as Git/GitHub to manage changes in test cases and scripts.
- Knowledge of defect-tracking tools such as JIRA and ServiceNow.
- Banking domain experience is a must.
- Understanding of the ETL process.
- Perform functional, integration, and regression testing for ETL processes.
- Validate and ensure data quality and consistency across different data sources and targets.
- Develop and execute test cases for ETL workflows and data pipelines.
- Load testing: ensure the data warehouse can handle the volume of data being loaded and queried under normal and peak conditions.
- Scalability: test the scalability of the data warehouse in terms of data growth and system performance.

Role 4: Senior Data Modeler
- 7+ years of experience in metadata management, data modelling, and related tools (Erwin, ER Studio, or others); 10+ years of overall IT experience.
- Hands-on relational, dimensional, and/or analytic experience (using RDBMS, dimensional data platform technologies, ETL, and data ingestion).
- Experience with data warehouses, data lakes, and enterprise big data platforms in multi-data-center contexts is required.
- Strong communication and presentation skills.
- Help the team implement business and IT data requirements through new data strategies and designs across all data platforms (relational, dimensional) and data tools (reporting, visualization, analytics, and machine learning).
- Work with business and application/solution teams to implement data strategies and develop conceptual, logical, and physical data models.
- Define and govern data-modelling and design standards, tools, best practices, and related development for enterprise data models.
- Hands-on modelling and mapping between source-system data models and data warehouse data models.
- Work proactively and independently to address project requirements, and articulate issues and challenges to reduce delivery risks related to modelling and mappings.
- Hands-on experience writing complex SQL queries.
- Experience in data modelling for NoSQL objects is good to have.

Role 5: Data Engineer - Databricks
- Design and build data pipelines using Spark SQL and PySpark in Azure Databricks.
- Design and build ETL pipelines using ADF.
- Build and maintain a Lakehouse architecture in ADLS/Databricks.
- Perform data preparation tasks including data cleaning, normalization, deduplication, and type conversion.
- Work with the DevOps team to deploy solutions in production environments.
- Control data processes and take corrective action when errors are identified; corrective action may include executing a workaround process and then identifying the cause of, and solution for, the data errors.
- Participate as a full member of the global Analytics team, providing solutions for and insights into data-related items.
- Collaborate with Data Science and Business Intelligence colleagues across the world to share key learnings, leverage ideas and solutions, and propagate best practices.
- Lead projects that include other team members, and participate in projects led by other team members.
- Apply change-management tools, including training, communication, and documentation, to manage upgrades, changes, and data migrations.
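As a toy illustration of the data-preparation duties listed for the Databricks role (cleaning, normalization, deduplication, type conversion), here is a minimal pure-Python sketch; the record fields and sample values are invented for illustration and stand in for what would normally be PySpark DataFrame operations:

```python
def prepare(records):
    """Deduplicate, normalize, and type-convert a list of raw record dicts."""
    seen, out = set(), []
    for rec in records:
        key = rec["id"]
        if key in seen:                           # deduplicate on the id field
            continue
        seen.add(key)
        out.append({
            "id": int(key),                       # type conversion: id string -> int
            "name": rec["name"].strip().lower(),  # normalization: trim and lowercase
            "amount": float(rec["amount"]),       # type conversion: amount -> float
        })
    return out

raw = [
    {"id": "1", "name": " Alice ", "amount": "10.5"},
    {"id": "2", "name": "BOB", "amount": "3"},
    {"id": "1", "name": "Alice", "amount": "10.5"},  # duplicate id, dropped
]
clean = prepare(raw)
```

In a real Databricks pipeline the same steps would typically be expressed as `dropDuplicates`, `trim`/`lower`, and `cast` calls on a Spark DataFrame.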

Data Scientist

Hyderabad, Bengaluru

5 - 10 years

INR 14.0 - 24.0 Lacs P.A.

Hybrid

Full Time

Experience Range: 5 - 10+ years
Work Location: Bangalore
Must Have Skills: Pandas, hypothesis testing, A/B testing, feature engineering, statistical analysis, machine learning, NumPy, Python, SQL
Good To Have Skills: Databricks

Job Description:
- Develop, implement, and optimize machine learning models for predictive analytics and decision-making.
- Work with structured and unstructured data to extract meaningful insights and patterns.
- Utilize Python and standard data science libraries such as NumPy, Pandas, SciPy, Scikit-Learn, TensorFlow, PyTorch, and Matplotlib for data analysis and model building.
- Design and develop data pipelines for efficient processing and analysis.
- Conduct exploratory data analysis (EDA) to identify trends and anomalies.
- Collaborate with cross-functional teams to integrate data-driven solutions into business strategies.
- Use data visualization and storytelling techniques to communicate complex findings to non-technical stakeholders.
- Stay current with the latest advancements in machine learning and AI technologies.

Required Qualifications:
- 5+ years of hands-on experience in data science and machine learning.
- Strong proficiency in Python and the relevant data science packages.
- Experience with machine learning frameworks such as TensorFlow, Keras, or PyTorch.
- Knowledge of SQL and database management for data extraction and manipulation.
- Expertise in statistical analysis, hypothesis testing, and feature engineering.
- Understanding of Marketing Mix Modeling is a plus.
- Experience with data visualization tools such as Matplotlib, Seaborn, or Plotly.
- Strong problem-solving skills and the ability to work with large datasets.
- Excellent communication skills with a knack for storytelling with data.

Preferred Qualifications:
- Experience with cloud platforms such as AWS, Azure, or GCP.
- Knowledge of big data technologies such as Hadoop, Spark, or Databricks.
- Exposure to NLP, computer vision, or deep learning techniques.
- Understanding of A/B testing and experimental design.
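The A/B-testing and hypothesis-testing skills called out above can be sketched with a small, self-contained example: a two-proportion z-test comparing the conversion rates of two variants, using only the standard library (the sample counts are invented for illustration; in practice one would reach for SciPy or statsmodels):

```python
import math

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B experiment.

    conv_a/conv_b: conversion counts; n_a/n_b: sample sizes.
    Returns (z statistic, two-sided p-value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF via erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: variant B converts 155/2400 vs. A's 120/2400.
z, p = ab_test_z(conv_a=120, n_a=2400, conv_b=155, n_b=2400)
```

Here the difference is significant at the conventional 5% level (p ≈ 0.03), so under these made-up numbers one would reject the null hypothesis of equal conversion rates.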

Java Full Stack Developer

Noida, Hyderabad, Bengaluru

4 - 9 years

INR 12.0 - 22.0 Lacs P.A.

Hybrid

Full Time

Experience Range: 4 - 10+ years
Work Location: Hyderabad, Bangalore, Noida
Must Have Skills: OOP, Core Java, RDBMS, JavaScript, Spring Boot, microservices architectures, React JS, RESTful APIs
Good To Have Skills: AWS/GCP/Azure, Maven/Gradle, Docker, CI/CD, ORM frameworks (Hibernate/JPA), Kubernetes

Job Description:
We are looking for a proficient and experienced Java Full Stack Developer to join our engineering team, with strong expertise in Java, the Spring ecosystem (Boot, Core, MVC, Data JPA), microservices, Kafka, frontend development (Angular/React/JavaScript/TypeScript), cloud platforms (AWS/Azure/GCP), and CI/CD tools. The ideal candidate will be responsible for developing scalable web applications from front to back, delivering intuitive UI experiences while also building secure and efficient backend services. You will play a key role in transforming complex business requirements into technical solutions, collaborating closely with cross-functional teams to deliver high-quality, end-to-end solutions in a fast-paced, agile environment.

Key Responsibilities:
- Design and develop robust backend services using Java, Spring Boot, Spring Core, Spring MVC, and Spring Data JPA.
- Build and maintain scalable microservices architectures.
- Integrate and manage Apache Kafka for real-time data streaming and messaging.
- Develop responsive and dynamic front-end applications using Angular, React, JavaScript, and TypeScript.
- Collaborate with DevOps teams to implement and maintain CI/CD pipelines.
- Deploy and manage applications on cloud platforms such as AWS, Azure, or GCP.
- Write clean, maintainable, and efficient code following best practices and design patterns.
- Participate in code reviews, unit testing, and performance tuning.
- Work closely with cross-functional teams, including product managers, QA, and UX/UI designers.
- Troubleshoot and resolve technical issues across the full stack.
- Stay up to date with emerging technologies and industry trends.
