
13,457 ETL Jobs - Page 9

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

7.0 years

0 Lacs

Greater Kolkata Area

Remote


Omni's team is passionate about Commerce and Digital Transformation. We have been successfully delivering Commerce solutions for clients across North America, Europe, Asia, and Australia, with experience executing and delivering projects in both B2B and B2C solutions.

Job Description
This is a remote position. We are seeking a Senior Data Engineer to architect and build robust, scalable, and efficient data systems that power AI and Analytics solutions. You will design end-to-end data pipelines, optimize data storage, and ensure seamless data availability for machine learning and business analytics use cases. This role demands deep engineering excellence, balancing performance, reliability, security, and cost to support real-world AI applications.

Key Responsibilities
- Architect, design, and implement high-throughput ETL/ELT pipelines for batch and real-time data processing.
- Build cloud-native data platforms: data lakes, data warehouses, feature stores.
- Work with structured, semi-structured, and unstructured data at petabyte scale.
- Optimize data pipelines for latency, throughput, cost-efficiency, and fault tolerance.
- Implement data governance, lineage, quality checks, and metadata management.
- Collaborate closely with Data Scientists and ML Engineers to prepare data pipelines for model training and inference.
- Implement streaming data architectures using Kafka, Spark Streaming, or AWS Kinesis (see the illustrative sketch below).
- Automate infrastructure deployment using Terraform, CloudFormation, or Kubernetes operators.

Requirements
- 7+ years in Data Engineering, Big Data, or Cloud Data Platform roles.
- Strong proficiency in Python and SQL.
- Deep expertise in distributed data systems (Spark, Hive, Presto, Dask).
- Cloud-native engineering experience (AWS, GCP, Azure): BigQuery, Redshift, EMR, Databricks, etc.
- Experience designing event-driven architectures and streaming systems (Kafka, Pub/Sub, Flink).
- Strong background in data modeling (star schemas, OLAP cubes, graph databases).
- Proven experience with data security, encryption, and compliance standards (e.g., GDPR, HIPAA).

Preferred Skills
- Experience in MLOps enablement: creating feature stores and versioned datasets.
- Familiarity with real-time analytics platforms (ClickHouse, Apache Pinot).
- Exposure to data observability tools such as Monte Carlo, Databand, or similar.
- Passionate about building high-scale, resilient, and secure data systems.
- Excited to support AI/ML innovation with state-of-the-art data infrastructure.
- Obsessed with automation, scalability, and best engineering practices.

(ref:hirist.tech)
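For illustration alongside this listing (not part of the original posting): a minimal sketch, in PySpark Structured Streaming, of the kind of Kafka-based streaming ingestion the responsibilities above describe. The broker address, topic, schema, and storage paths are hypothetical placeholders, and the Spark Kafka connector package is assumed to be available.

```python
# Illustrative sketch only; broker, topic, schema, and paths are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

# Assumed shape of each Kafka message for this example.
schema = (StructType()
          .add("order_id", StringType())
          .add("amount", DoubleType())
          .add("event_time", TimestampType()))

# Read a stream of JSON events from Kafka.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "orders")
       .load())

# Parse the message value and keep only the parsed columns.
events = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(from_json(col("json"), schema).alias("e"))
          .select("e.*"))

# Write parsed events to a data lake path in Parquet, with checkpointing
# so the stream can recover after failures.
query = (events.writeStream
         .format("parquet")
         .option("path", "s3a://example-lake/orders/")
         .option("checkpointLocation", "s3a://example-lake/_checkpoints/orders/")
         .outputMode("append")
         .start())

query.awaitTermination()
```

The checkpoint location is what provides the fault tolerance such postings ask for: on restart, the stream resumes from the last committed Kafka offsets rather than reprocessing or losing data.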

Posted 1 day ago

Apply

5.0 years

0 Lacs

Greater Kolkata Area

Remote


Data Engineer - Google Cloud
Location: Remote, India

About Us
Aviato Consulting is looking for a highly skilled and motivated Data Engineer to join our expanding team. This role is ideal for someone with a deep understanding of cloud-based data solutions, with a focus on Google Cloud (GCP) and associated technologies. GCP certification is mandatory for this position to ensure the highest level of expertise and professionalism. You will work directly with clients, translating their business requirements into scalable data solutions, while providing technical expertise and guidance.

Key Responsibilities
- Client Engagement: Work closely with clients to understand business needs, gather technical requirements, and design solutions leveraging GCP services.
- Data Pipeline Design & Development: Build and manage scalable data pipelines using tools such as Apache Beam, Cloud Dataflow, and Cloud Composer (see the illustrative sketch below).
- Data Warehousing & Lake Solutions: Architect, implement, and optimize BigQuery-based data lakes and warehouses.
- Real-Time Data Processing: Implement and manage streaming data pipelines using Kafka, Pub/Sub, and similar technologies.
- Data Analysis & Visualization: Create insightful data dashboards and visualizations using tools like Looker, Data Studio, or Tableau.
- Technical Leadership & Mentorship: Provide guidance and mentorship to team members and clients, helping them leverage the full potential of Google Cloud.

Required Qualifications
- Experience: 5+ years as a Data Engineer working with cloud-based platforms.
- Proven experience in Python with libraries like Pandas and NumPy.
- Strong understanding of and experience with FastAPI for building APIs.
- Expertise in building data pipelines using Apache Beam, Cloud Dataflow, or similar tools.
- Solid knowledge of Kafka for real-time data streaming.
- Proficiency with BigQuery, Google Pub/Sub, and other Google Cloud services.
- Familiarity with Apache Hadoop for distributed data processing.

Technical Skills
- Strong understanding of data architecture and processing techniques.
- Experience with big data environments and tools like Apache Hadoop.
- Solid understanding of ETL pipelines: data ingestion, transformation, and storage.
- Knowledge of data modeling, data warehousing, and big data management principles.

Certifications
- Google Cloud certification (Professional Data Engineer or Cloud Architect) is mandatory for this role.

Soft Skills
- Excellent English communication skills.
- Client-facing experience and the ability to manage client relationships effectively.
- Strong problem-solving skills with a results-oriented approach.

Preferred Qualifications
- Visualization Tools: Experience with tools like Looker, Power BI, or Tableau.

Benefits
- Competitive salary and benefits package.
- Opportunities to work with cutting-edge cloud technologies with large customers.
- Collaborative work environment that encourages learning and professional growth.
- A chance to work on high-impact projects for leading clients in diverse industries.

If you're passionate about data engineering, cloud technologies, and solving complex data problems for clients, we'd love to hear from you!

(ref:hirist.tech)
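For illustration alongside this listing (not part of the original posting): a minimal batch-pipeline sketch in the Apache Beam Python SDK, of the kind the Dataflow and BigQuery responsibilities above describe. The bucket, project, dataset, and table names are hypothetical; running on Dataflow would additionally need the usual --runner, --project, and --region options.

```python
# Illustrative sketch only; bucket, project, dataset, and table names are hypothetical.
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def parse_event(line):
    """Parse one JSON line into a (country, amount) pair."""
    record = json.loads(line)
    return record["country"], float(record["amount"])

options = PipelineOptions()  # add --runner=DataflowRunner, --project, --region for Dataflow

with beam.Pipeline(options=options) as p:
    (p
     | "ReadEvents" >> beam.io.ReadFromText("gs://example-bucket/events/*.json")
     | "ParseJson" >> beam.Map(parse_event)
     | "SumPerCountry" >> beam.CombinePerKey(sum)
     | "ToRow" >> beam.Map(lambda kv: {"country": kv[0], "total_amount": kv[1]})
     | "WriteToBQ" >> beam.io.WriteToBigQuery(
           "example-project:analytics.country_totals",
           schema="country:STRING,total_amount:FLOAT",
           write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE,
           create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED))
```

The same pipeline graph runs unchanged on the local DirectRunner or on Cloud Dataflow, which is the usual way such jobs are tested before deployment.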

Posted 1 day ago

Apply

14.0 years

0 Lacs

Greater Kolkata Area

On-site


Position Overview
We are seeking a dynamic and experienced Program Manager to lead and oversee the Data Governance Program for a large banking organization. The Program Manager will be responsible for the successful execution of data governance initiatives, ensuring compliance with regulatory requirements, promoting data quality, and fostering a culture of data stewardship across the enterprise. This role requires a strategic thinker with exceptional leadership, communication, and organizational skills to align cross-functional teams and drive the adoption of governance frameworks.

Key Responsibilities

Program Leadership
- Develop and execute a comprehensive Data Governance strategy aligned with the organization's objectives and regulatory requirements.
- Act as a liaison between senior leadership, stakeholders, and cross-functional teams to ensure program alignment and success.
- Drive organizational change to establish a culture of data governance and stewardship.
- Maintain a strong focus on program risk identification, timely reporting, and devising actions to address risks.
- Perform cost-benefit analysis and justification for investments.

Planning and Project Management
- Project planning, scheduling, and tracking.
- Work prioritization and resource planning.
- Risk identification and reporting.
- Team planning and management.
- Status reporting.

Governance Framework Implementation
- Establish and manage a robust Data Governance framework, including policies, standards, roles, and responsibilities.
- Implement data cataloging, metadata management, and data lineage tools to enhance data visibility and accessibility.
- Oversee the creation of workflows and processes to ensure adherence to governance policies.

Stakeholder Engagement
- Report to CXO-level executives with program status updates, risk management, and outcomes.
- Collaborate with business units, IT teams, and compliance officers to identify governance priorities and resolve data-related challenges.
- Facilitate Data Governance Council meetings and ensure effective decision-making.
- Serve as a point of contact for internal and external auditors regarding data governance-related queries.

Compliance and Risk Management
- Ensure adherence to industry regulations and banking-specific compliance requirements.
- Identify and mitigate risks related to data usage, sharing, and security.

Monitoring and Reporting
- Develop key performance indicators (KPIs) and metrics to measure the effectiveness of the Data Governance Program.
- Provide regular updates to CXO-level executive leadership on program status, risks, and outcomes.
- Prepare and present audit and compliance reports as required.

Team Leadership and Mentorship
- Lead cross-functional teams, including data stewards, analysts, and governance professionals.
- Provide training and mentoring to promote awareness and understanding of data governance practices.

Technical Expertise
- Understanding of data engineering principles and practices: a good understanding of data pipelines, data storage solutions, data quality concepts, and data security is crucial.
- Familiarity with data engineering tools and technologies: this may include knowledge of ETL/ELT tools, Informatica IDMC, MDM, data warehousing solutions, Collibra data quality, cloud platforms (AWS, Azure, GCP), and data governance frameworks.

Qualifications
- Bachelor's degree in Computer Science, Data Management, Business Administration, or a related field; MBA or equivalent experience preferred.
- 14+ years of experience in program management, with at least 6+ years focused on data governance or data management with MDM in the banking or financial services sector.
- Strong knowledge of data governance frameworks, principles, and tools (e.g., Collibra, Informatica, Alation).
- Experience with regulatory compliance requirements for the banking industry, such as GDPR, CCPA, BCBS 239, and AML/KYC regulations.
- Proven track record of successfully managing large, complex programs with cross-functional teams.
- Excellent communication and stakeholder management skills, with the ability to influence and align diverse groups.
- Familiarity with data analytics, data quality management, and enterprise architecture concepts.
- Certification in program or project management (e.g., PMP, PRINCE2) or data governance (e.g., DGSP, CDMP) is a plus.

Key Competencies
- Strong strategic thinking and problem-solving skills.
- Ability to work under pressure and manage multiple priorities.
- Exceptional leadership and interpersonal skills.
- Proficiency in program management tools and methodologies.
- Strong analytical and decision-making capabilities.

(ref:hirist.tech)

Posted 1 day ago

Apply

8.0 years

0 Lacs

Greater Kolkata Area

On-site


JD for Data Modeler

Key Requirements:
- Total 8+ years of experience, with 8 years of hands-on experience in data modelling.
- Expertise in conceptual, logical, and physical data modeling.
- Proficient in tools such as Erwin, SQLDBM, or similar.
- Strong understanding of data governance and database design best practices.
- Excellent communication and collaboration skills.
- 8+ years of working experience in Data Engineering and Data Analytics projects, implementing Data Warehouse, Data Lake, and Lakehouse solutions and associated ETL/ELT patterns.
- Worked as a Data Modeller on one or two implementations, creating and implementing data models and database designs using Dimensional and ER models.
- Good knowledge of and experience in modelling complex scenarios such as many-to-many relationships, SCD types, late-arriving facts and dimensions, etc. (see the illustrative sketch below).
- Hands-on experience with at least one data modelling tool such as Erwin, ER/Studio, Enterprise Architect, or SQLDBM.
- Experience working closely with business stakeholders/business analysts to understand functional requirements and translate them into data models and database designs.
- Experience creating conceptual and logical models and translating them into physical models to address both functional and non-functional requirements.
- Strong knowledge of SQL; able to write complex queries and profile data to understand relationships and data quality issues.
- Very strong understanding of database modelling and design principles such as normalization, denormalization, and isolation levels.
- Experience in performance optimization through database design (physical modelling).

(ref:hirist.tech)
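For illustration alongside this listing (not part of the original posting): a minimal pandas sketch of the slowly changing dimension (Type 2) handling mentioned above, closing out the old version of a changed row and appending a new one. The column names, the single tracked attribute, and the function itself are simplified assumptions, not a prescribed implementation.

```python
# Illustrative SCD Type 2 sketch; column names and change-detection logic are simplified assumptions.
import pandas as pd

HIGH_DATE = pd.Timestamp.max  # open-ended marker for the "current" row version

def apply_scd2(dim: pd.DataFrame, incoming: pd.DataFrame, load_date: pd.Timestamp) -> pd.DataFrame:
    """Merge incoming customer records into a Type 2 dimension table.

    dim columns:      customer_id, city, effective_from, effective_to, is_current (bool)
    incoming columns: customer_id, city
    """
    current = dim[dim["is_current"]].set_index("customer_id")
    incoming = incoming.set_index("customer_id")

    # Rows whose tracked attribute changed, and brand-new customers.
    common = current.index.intersection(incoming.index)
    mask = current.loc[common, "city"].to_numpy() != incoming.loc[common, "city"].to_numpy()
    changed = common[mask]
    new_ids = incoming.index.difference(current.index)

    # Close out the old versions of changed rows.
    close_mask = dim["customer_id"].isin(changed) & dim["is_current"]
    dim.loc[close_mask, ["effective_to", "is_current"]] = [load_date, False]

    # Append new versions for changed rows and first versions for new customers.
    to_insert = incoming.loc[changed.union(new_ids)].reset_index()
    to_insert["effective_from"] = load_date
    to_insert["effective_to"] = HIGH_DATE
    to_insert["is_current"] = True

    return pd.concat([dim, to_insert], ignore_index=True)
```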

Posted 1 day ago

Apply

5.0 years

0 Lacs

Greater Kolkata Area

On-site


About Sleek
Through proprietary software and AI, along with a focus on customer delight, Sleek makes the back office easy for micro SMEs. We give entrepreneurs time back to focus on what they love doing: growing their business and being with customers. With a surging number of entrepreneurs globally, we are innovating in a highly lucrative space.

We operate 3 business segments:
- Corporate Secretary: Automating the company incorporation, secretarial, filing, Nominee Director, mailroom, and immigration processes via custom online robots and SleekSign. We are the market leaders in Singapore with 5% market share of all new business incorporations.
- Accounting & Bookkeeping: Redefining what it means to do Accounting, Bookkeeping, Tax, and Payroll thanks to our proprietary SleekBooks ledger, AI tools, and exceptional customer service.
- FinTech payments: Overcoming a key challenge for entrepreneurs by offering digital banking services to new businesses.

Sleek launched in 2017 and now has around 15,000 customers across our offices in Singapore, Hong Kong, Australia, and the UK. We have around 450 staff with an intact startup mindset. We have achieved >70% compound annual growth in revenue over the last 5 years and as a result have been recognised by The Financial Times, The Straits Times, Forbes, and LinkedIn as one of the fastest growing companies in Asia. Backed by world-class investors, we are on track to be one of the few cash flow positive, tech-enabled unicorns.

The Role
We are looking for an experienced Senior Data Engineer to join our growing team. As a key member of our data team, you will design, build, and maintain scalable data pipelines and infrastructure to enable data-driven decision-making across the organization. This role is ideal for a proactive, detail-oriented individual passionate about optimizing and leveraging data for impactful business outcomes. You will:
- Work closely with cross-functional teams to translate our business vision into impactful data solutions.
- Drive the alignment of data architecture requirements with strategic goals, ensuring each solution not only meets analytical needs but also advances our core objectives.
- Be pivotal in bridging the gap between business insights and technical execution by tackling complex challenges in data integration, modeling, and security, and by setting the stage for exceptional data performance and insights.
- Shape the data roadmap, influence design decisions, and empower our team to deliver innovative, scalable, high-quality data solutions.

Objectives
- Achieve and maintain a data accuracy rate of at least 99% for all business-critical dashboards by start of day (accounting for corrections and job failures), with 24-business-hour detection of errors and a 5-day correction SLA.
- Ensure 95% of data on dashboards originates from technical data pipelines to mitigate data drift.
- Set up strategic dashboards based on business needs which are robust, scalable, and easy and quick to operate and maintain.
- Reduce costs of data warehousing and pipelines by 30%, then maintain costs as data needs grow.
- Achieve 50 eNPS on data services (e.g. dashboards) from key business stakeholders.

Responsibilities
- Data Pipeline Development: Design, implement, and optimize robust, scalable ETL/ELT pipelines to process large volumes of structured and unstructured data.
- Data Modeling: Develop and maintain conceptual, logical, and physical data models to support analytics and reporting requirements.
- Infrastructure Management: Architect, deploy, and maintain cloud-based data platforms (e.g. AWS, GCP).
- Collaboration: Work closely with data analysts, business owners, and stakeholders to understand data requirements and deliver reliable solutions, including designing and implementing robust, efficient, and scalable data visualization on Tableau or Looker Studio.
- Data Governance: Ensure data quality, consistency, and security through robust validation and monitoring frameworks.
- Performance Optimization: Monitor, troubleshoot, and optimize the performance of data systems and pipelines.
- Innovation: Stay up to date with the latest industry trends and emerging technologies to continuously improve data engineering.

Skills & Qualifications
- Experience: 5+ years in data engineering, software engineering, or a related field.
- Technical Proficiency: Proficiency in working with relational databases (e.g. PostgreSQL, MySQL) and NoSQL databases (e.g. MongoDB, Cassandra). Familiarity with big data frameworks like Hadoop, Hive, Spark, Airflow, BigQuery, etc. Strong expertise in programming languages such as Python, NodeJS, SQL, etc.
- Cloud Platforms: Advanced knowledge of cloud platforms (AWS or GCP) and their associated data services.
- Data Warehousing: Expertise in modern data warehouses like BigQuery, Snowflake, or Redshift.
- Tools & Frameworks: Expertise in version control systems (e.g. Git), CI/CD pipelines, and JIRA.
- Big Data Ecosystems / BI: BigQuery, Tableau, Looker Studio.
- Industry Domain Knowledge: Google Analytics (GA), HubSpot, Accounting/Compliance, etc.
- Soft Skills: Excellent problem-solving abilities, attention to detail, and strong communication skills.

Preferred Qualifications
- Degree in Computer Science, Engineering, or a related field.
- Experience with real-time data streaming technologies (e.g. Kafka, Kinesis).
- Familiarity with machine learning pipelines and tools.
- Knowledge of data security best practices and regulatory requirements.

The Interview Process
The successful candidate will participate in the interview stages below (note that the order might differ). We anticipate the process to last no more than 3 weeks from start to finish. Whether the interviews are held over video call or in person will depend on your location and the role.
- Case study: a 60-minute chat with the Data Analyst, who will give you some real-life challenges that this role faces and ask for your approach to solving them.
- Career deep dive: a 60-minute chat with the Hiring Manager (COO). They'll discuss your last 1-2 roles to understand your experience in more detail.
- Behavioural fit assessment: a 60-minute chat with our Head of HR or Head of Hiring, who will dive into some of your recent work situations to understand how you think and work.
- Offer + reference interviews: we'll make a non-binding offer verbally or over email, followed by a couple of short phone or video calls with references that you provide.

Background Screening
Please be aware that Sleek is a regulated entity and as such is required to perform different levels of background checks on staff depending on their role. This may include using external vendors to verify the below:
- Your education.
- Any criminal history.
- Any political exposure.
- Any bankruptcy or adverse credit history.
We will ask for your consent before conducting these checks. Depending on your role at Sleek, an adverse result on one of these checks may prohibit you from passing probation.

(ref:hirist.tech)

Posted 1 day ago

Apply

0 years

0 Lacs

Kolkata, West Bengal, India

On-site


Job Description

Role: Azure Databricks Developer
Overview: Data Engineer with good experience in Azure Databricks and Python.
Must have: Databricks, Python, Azure.
Good to have: ADF.

Requirements
- Candidate must be proficient in Databricks.
- Understands where to obtain information needed to make the appropriate decisions.
- Demonstrates ability to break down a problem into manageable pieces and implement effective, timely solutions.
- Identifies the problem versus the symptoms.
- Manages problems that require involvement of others to solve.
- Reaches sound decisions quickly.
- Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit.

Roles & Responsibilities
- Provide innovative and cost-effective solutions using Databricks.
- Optimize the use of all available resources.
- Develop solutions to meet business needs that reflect a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit.
- Learn and adapt quickly to new technologies as per the business need.
- Develop a team of Operations Excellence, building tools and capabilities that the development teams leverage to maintain high levels of performance, scalability, security, and availability.

Skills
- The candidate must have 7-10 years of experience in Databricks Delta Lake (see the illustrative sketch below).
- Hands-on experience with Azure.
- Experience in Python scripting.
- Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses.
- Strong experience with relational databases and data access methods, especially SQL.
- Knowledge of Azure architecture and design.

(ref:hirist.tech)
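For illustration alongside this listing (not part of the original posting): a minimal PySpark sketch of typical Databricks Delta Lake work, ingesting a CSV extract and merging it into a Delta table. The storage paths and key column are hypothetical, and a Delta-enabled Spark session (as on Databricks) is assumed to be available as `spark`.

```python
# Illustrative sketch only; paths and the key column are hypothetical.
# Assumes a Databricks or Delta-enabled Spark session is available as `spark`.
from delta.tables import DeltaTable
from pyspark.sql import functions as F

# Read a raw daily extract from cloud storage.
updates = (spark.read
           .option("header", True)
           .csv("abfss://raw@examplestorage.dfs.core.windows.net/customers/2024-06-01/")
           .withColumn("load_date", F.current_date()))

target_path = "abfss://curated@examplestorage.dfs.core.windows.net/delta/customers"

if DeltaTable.isDeltaTable(spark, target_path):
    # Upsert (merge) the new extract into the existing Delta table.
    target = DeltaTable.forPath(spark, target_path)
    (target.alias("t")
     .merge(updates.alias("s"), "t.customer_id = s.customer_id")
     .whenMatchedUpdateAll()
     .whenNotMatchedInsertAll()
     .execute())
else:
    # First load: create the Delta table from the extract.
    updates.write.format("delta").save(target_path)
```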

Posted 1 day ago

Apply

10.0 - 15.0 years

0 Lacs

Sahibzada Ajit Singh Nagar, Punjab, India

On-site


Job Title: Director - AI Automation & Data Sciences
Experience Required: 10-15 Years
Industry: Legal Technology / Cybersecurity / Data Science
Department: Technology & Innovation

About The Role
We are seeking an exceptional Director - AI Automation & Data Sciences to lead the innovation engine behind our Managed Document Review and Cyber Incident Response services. This is a senior leadership role where you'll leverage advanced AI and data science to drive automation, scalability, and differentiation in service delivery. If you are a visionary leader who thrives at the intersection of technology and operations, this is your opportunity to make a global impact.

Why Join Us
- Cutting-edge AI & Data Science technologies at your fingertips
- Globally recognized Cyber Incident Response Team
- Prestigious clientele of Fortune 500 companies and industry leaders
- Award-winning, inspirational workspaces
- Transparent, inclusive, and growth-driven culture
- Industry-best compensation that recognizes excellence

Key Responsibilities (KRAs)
- Lead and scale AI & data science initiatives across Document Review and Incident Response programs
- Architect intelligent automation workflows to streamline legal review, anomaly detection, and threat analytics
- Drive end-to-end deployment of ML and NLP models into production environments
- Identify and implement AI use cases that deliver measurable business outcomes
- Collaborate with cross-functional teams including Legal Tech, Cybersecurity, Product, and Engineering
- Manage and mentor a high-performing team of data scientists, ML engineers, and automation specialists
- Evaluate and integrate third-party AI platforms and open-source tools for accelerated innovation
- Ensure AI models comply with privacy, compliance, and ethical AI principles
- Define and monitor key metrics to track model performance and automation ROI
- Stay abreast of emerging trends in generative AI, LLMs, and cybersecurity analytics

Technical Skills & Tools
- Proficiency in Python, R, or Scala for data science and automation scripting
- Expertise in Machine Learning, Deep Learning, and NLP techniques
- Hands-on experience with LLMs, Transformer models, and vector databases
- Strong knowledge of data engineering pipelines: ETL, data lakes, and real-time analytics
- Familiarity with Cyber Threat Intelligence, anomaly detection, and event correlation
- Experience with platforms like AWS SageMaker, Azure ML, Databricks, Hugging Face
- Advanced use of TensorFlow, PyTorch, spaCy, Scikit-learn, or similar frameworks
- Knowledge of containerization (Docker, Kubernetes) and CI/CD pipelines for MLOps
- Strong command of SQL, NoSQL, and big data tools (Spark, Kafka)

Qualifications
- Bachelor's or Master's in Computer Science, Data Science, AI, or a related field
- 10-15 years of progressive experience in AI, Data Science, or Automation
- Proven leadership of cross-functional technology teams in high-growth environments
- Experience working in LegalTech, Cybersecurity, or related high-compliance industries preferred

(ref:hirist.tech)

Posted 1 day ago

Apply

5.0 years

0 Lacs

Thane, Maharashtra, India

On-site


About the Role
Implement architecture and design from the definition phase to go-live. Responsibilities include:
- Work with business analysts and SMEs to understand the current landscape and priorities.
- Define conceptual and low-level models using AI technology.
- Review designs to make sure they are aligned with the architecture.
- Hands-on development of AI-led solutions.
- Implement the entire data pipeline: data crawling, ETL, creating fact tables, data quality management, etc.
- Integrate with multiple systems using APIs, web services, or data exchange mechanisms.
- Build interfaces that gather data from various data sources such as flat files, data extracts, and incoming feeds, as well as directly interfacing with enterprise applications.
- Ensure that the solution is scalable and maintainable and meets best practices for security, performance, and data management.
- Own research assignments and development.
- Lead, develop, and assist developers and other team members.
- Collaborate, validate, and provide frequent updates to internal stakeholders throughout the project.
- Define and deliver against the solution benefits statement.
- Positively and constructively engage with clients and operations teams.

Education
A Bachelor's degree in Computer Science, Software Engineering, or a related field.

Required Skills
- Minimum 5 years of IT experience, including 3+ years as a full stack developer, preferably using Python.
- 2+ years of hands-on experience in Azure Data Factory, Azure Databricks / Spark (familiarity with Fabric), Azure Data Lake Storage (Gen1/Gen2), and Azure Synapse / SQL DW.
- Expertise in designing and deploying data pipelines on Azure: data crawling, ETL, data warehousing, and data applications.
- Experienced in AI technology including machine learning algorithms, natural language processing, deep learning, image recognition, speech recognition, etc.
- Proficient in programming languages like Python (full stack exposure).
- Proficient in dealing with all the layers in a solution: multi-channel presentation, business logic in middleware, data access layer, RDBMS and NoSQL databases (e.g. MySQL, MongoDB, Cassandra, SQL Server).
- Familiar with vector databases and feature stores such as FAISS, ChromaDB, Pinecone, and Weaviate.
- Experience in implementing and deploying applications on Azure.
- Proficient in creating technical documents such as architecture views, technology architecture blueprints, and design specifications.
- Experienced in using tools like the Rational suite, Enterprise Architect, Eclipse, and source code versioning systems like Git.
- Experience with different development methodologies (e.g. RUP, Scrum).

(ref:hirist.tech)

Posted 1 day ago

Apply

0 years

0 Lacs

Sadar, Uttar Pradesh, India

On-site


Roles & Responsibilities
- Lead the design, configuration, and implementation of SAP S/4HANA Finance & Controlling modules.
- Collaborate with business stakeholders to gather and analyze requirements, and translate them into effective SAP solutions.
- Perform fit/gap analysis and effort estimation, and develop functional specifications for custom developments.
- Conduct unit testing, integration testing, and user acceptance testing (UAT) to ensure the quality and functionality of the implemented solutions.
- Manage data migration processes, including data extraction, transformation, and loading (ETL) activities.
- Provide ongoing support and maintenance for SAP S/4HANA Finance modules, addressing any functional issues or enhancements.
- Develop and deliver end-user training and documentation to ensure successful adoption of the implemented solutions.
- Work closely with cross-functional teams, including IT, finance, and business units, to ensure seamless integration and alignment of SAP solutions with business processes.
- Stay updated with the latest SAP technologies and best practices, and provide recommendations for continuous improvement.
- Lead and manage AMS projects, ensuring timely resolution of incidents, service requests, and change requests related to SAP FICO.

Skills
- SAP FICO S/4HANA implementation experience (minimum 2 rollouts).
- Banking or Manufacturing domain experience.

Education: BE/B.Tech/MCA

(ref:hirist.tech)

Posted 1 day ago

Apply

2.0 years

0 Lacs

Telangana, India

On-site


JOB DESCRIPTION
- Design and develop QlikView and Qlik Sense dashboards and reports.
- Collaborate with business stakeholders to gather and understand requirements.
- Perform data extraction, transformation, and loading (ETL) processes.
- Optimize Qlik applications for performance and usability.
- Ensure data accuracy and consistency across all BI solutions.
- Conduct testing and validation of Qlik applications.
- Provide ongoing support and troubleshooting for Qlik solutions.
- Stay up-to-date with the latest Qlik technologies and industry trends.

QUALIFICATIONS
- Bachelor's degree in Computer Science, Information Technology, or related field.
- 2+ years of experience in Qlik development (QlikView and Qlik Sense).
- Strong understanding of data visualization best practices.
- Proficiency in SQL and data modeling.
- Experience with ETL processes and tools.
- Excellent problem-solving and analytical skills.
- Strong communication and interpersonal skills.

Posted 1 day ago

Apply

4.0 years

0 Lacs

Mysuru, Karnataka, India

On-site


About iSOCRATES
Since 2015, iSOCRATES advises on, builds, and manages mission-critical Marketing, Advertising, and Data technologies, platforms, and processes as the Global Leader in MADTECH Resource Planning and Execution(TM). iSOCRATES delivers globally proven, reliable, and affordable Strategy and Operations Consulting and Managed Services for marketers, agencies, publishers, and the data/tech providers that enable them. iSOCRATES is staffed 24/7/365 with its proven specialists who save partners money and time and achieve transparent, accountable performance while delivering extraordinary value. Savings stem from a low-cost, focused global delivery model at scale that benefits from continuous re-investment in technology and specialized training.

About MADTECH.AI
MADTECH.AI is the Unified Marketing, Advertising, and Data Decision Intelligence Platform purpose-built to deliver speed to value for marketers. At MADTECH.AI, we make real-time AI-driven insights accessible to everyone. Whether you're a global or emerging brand, agency, publisher, or data/tech provider, we give you a single source of truth - so you can capture sharper insights that drive better marketing decisions faster and more affordably than ever before. MADTECH.AI unifies and transforms MADTECH data and centralizes decision intelligence in a single, affordable platform. Leave data wrangling, data model building, proactive problem solving, and data visualization to MADTECH.AI.

Job Description
As a BI Analyst specializing in any of the BI tools, your role focuses on leveraging your expertise in BI tools to research, design, create, and implement data models, reports/dashboards, and data-driven applications. You will spearhead the design and implementation of end-to-end BI solutions, actively identifying and resolving data quality issues. As a collaborative team member, you will closely engage with other development teams within the organization to strategize, coordinate, and deliver solutions aligned with client priorities and internal product roadmaps.

Responsibilities
The Business Intelligence (BI) Analyst works directly with business stakeholders to create data-based solutions that analyze performance and maintain operations.
- Design data models and reports.
- Translate requests from business stakeholders into actionable reports.
- Meet with business stakeholders to clarify requirements and communicate progress.
- Collaborate with team members.
- Write and troubleshoot SQL queries.
- Prep data for reports and analysis.
- Peer review work of other team members.
- Plainly communicate technical issues and concepts to business stakeholders.
- Support automated report distributions.
- Promote reports from development to test to production.
- Flexible to work on any BI platform.

Required Skills
- Graduate with 4-7 years of experience; 3+ years of relevant experience is mandatory.
- Extensive experience with Power BI & Tableau, and SQL / relational databases.
- Extensive experience in dimensional data modeling: star schemas, snowflake schemas, denormalized models, and handling slowly changing dimensions/attributes.
- Strong understanding of disciplined approaches to data visualization and reporting.
- Experience in understanding complex ETL processes involving relational and non-relational data.
- Proven record of experience working with Apache Superset, Power BI, Tableau, or Looker.
- Connect and harmonize (both structured & unstructured) data across third-party data platforms.
- Draw insights from data and action them through alerts & customizable publishing tools.
- Certification in Power BI or any BI tool is preferred.
- Ability to lead cross-functional team communication and develop cross-functional partnerships.
- Excellent problem-solving and data analysis skills.
- Familiarity with our data technology stack (SQL Server, Amazon Redshift, S3, Athena, and Snowflake).
- Experience in the Media/Marketing industry or domain is a plus.
- Ability to quickly grasp existing systems, goals, and technology options for given situations.
- Experienced in the complete lifecycle of at least one data warehouse / Business Intelligence program at enterprise scale.
- Experience in Agile delivery and Agile tools like JIRA/Azure DevOps is a plus.

Preferred Skills
- Structured thinker, result-oriented, passionate about data-driven decision-making, particularly leveraging Apache Superset/Tableau/Power BI.
- Passion for problem-solving and developing reports & dashboards.
- Should possess good communication skills.
- Should be a very good team player with a go-getter attitude; results-driven, adaptable, inspirational, organized, and quality-focused.
- Ability to handle complex problems from design to execution and deliver in a time-bound manner under constraints.

Minimum Education Required
Bachelor's degree in Computer Science or a related quantitative field required (master's degree in business administration preferred).

Posted 1 day ago

Apply

3.0 years

0 Lacs

Mysuru, Karnataka, India

On-site


About iSOCRATES
Since 2015, iSOCRATES advises on, builds, and manages mission-critical Marketing, Advertising, and Data technologies, platforms, and processes as the Global Leader in MADTECH Resource Planning and Execution(TM). iSOCRATES delivers globally proven, reliable, and affordable Strategy and Operations Consulting and Managed Services for marketers, agencies, publishers, and the data/tech providers that enable them. iSOCRATES is staffed 24/7/365 with its proven specialists who save partners money and time and achieve transparent, accountable performance while delivering extraordinary value. Savings stem from a low-cost, focused global delivery model at scale that benefits from continuous re-investment in technology and specialized training.

About MADTECH.AI
MADTECH.AI is the Unified Marketing, Advertising, and Data Decision Intelligence Platform purpose-built to deliver speed to value for marketers. At MADTECH.AI, we make real-time AI-driven insights accessible to everyone. Whether you're a global or emerging brand, agency, publisher, or data/tech provider, we give you a single source of truth - so you can capture sharper insights that drive better marketing decisions faster and more affordably than ever before. MADTECH.AI unifies and transforms MADTECH data and centralizes decision intelligence in a single, affordable platform. Leave data wrangling, data model building, proactive problem solving, and data visualization to MADTECH.AI.

Job Description
As a BI Analyst specializing in any of the BI tools, your role focuses on leveraging your expertise in BI tools to research, design, create, and implement data models, reports/dashboards, and data-driven applications. You will spearhead the design and implementation of end-to-end BI solutions, actively identifying and resolving data quality issues. As a collaborative team member, you will closely engage with other development teams within the organization to strategize, coordinate, and deliver solutions aligned with client priorities and internal product roadmaps.

Responsibilities
The Business Intelligence (BI) Analyst works directly with business stakeholders to create data-based solutions that analyse performance and maintain operations.
- Design data models and reports.
- Translate requests from business stakeholders into actionable reports.
- Meet with business stakeholders to clarify requirements and communicate progress.
- Collaborate with team members in a one-on-one setting.
- Write and troubleshoot SQL queries.
- Prep data for reports and analysis.
- Peer review work of other team members.
- Plainly communicate technical issues and concepts to business stakeholders.
- Support automated report distributions.
- Promote reports from development to test to production.

Required Skills
- Graduate with 3-7 years of experience; 3+ years of relevant experience is mandatory.
- Extensive experience with Apache Superset, Power BI, Tableau, or other BI platforms, and SQL / relational databases.
- Extensive experience in dimensional data modeling: star schemas, snowflake schemas, denormalized models, and handling slowly changing dimensions/attributes.
- Strong understanding of disciplined approaches to data visualization and reporting.
- Experience in understanding complex ETL processes involving relational and non-relational data.
- Proven record of experience working with Apache Superset, Power BI, Tableau, or Looker.
- Connect and harmonize (both structured & unstructured) data across third-party data platforms.
- Draw insights from data and action them through alerts & customizable publishing tools.
- Certification in Power BI or any BI tool is preferred.
- Ability to lead cross-functional team communication and develop cross-functional partnerships.
- Excellent problem-solving and data analysis skills.
- Familiarity with our data technology stack (SQL Server, Amazon Redshift, S3, Athena, and Snowflake).
- Experience in the Media/Marketing industry or domain is a plus.
- Ability to quickly grasp existing systems, goals, and technology options for given situations.
- Experienced in the complete lifecycle of at least one data warehouse / Business Intelligence program at enterprise scale.
- Experience in Agile delivery and Agile tools like JIRA/Azure DevOps is a plus.

Preferred Skills
- Structured thinker, result-oriented, passionate about data-driven decision-making, particularly leveraging Apache Superset/Tableau/Power BI.
- Passion for problem-solving and developing reports & dashboards.
- Should possess good communication skills.
- Should be a very good team player with a go-getter attitude; results-driven, adaptable, inspirational, organized, and quality-focused.
- Ability to handle complex problems from design to execution and deliver in a time-bound manner under constraints.

Minimum Education Required
Bachelor's degree in Computer Science or a related quantitative field required (master's degree in business administration preferred).

Posted 1 day ago

Apply

2.0 years

0 Lacs

Mysuru, Karnataka, India

On-site


About iSOCRATES
Since 2015, iSOCRATES advises on, builds, and manages mission-critical Marketing, Advertising, and Data technologies, platforms, and processes as the Global Leader in MADTECH Resource Planning and Execution(TM). iSOCRATES delivers globally proven, reliable, and affordable Strategy and Operations Consulting and Managed Services for marketers, agencies, publishers, and the data/tech providers that enable them. iSOCRATES is staffed 24/7/365 with its proven specialists who save partners money and time and achieve transparent, accountable performance while delivering extraordinary value. Savings stem from a low-cost, focused global delivery model at scale that benefits from continuous re-investment in technology and specialized training.

About MADTECH.AI
MADTECH.AI is the Unified Marketing, Advertising, and Data Decision Intelligence Platform purpose-built to deliver speed to value for marketers. At MADTECH.AI, we make real-time AI-driven insights accessible to everyone. Whether you're a global or emerging brand, agency, publisher, or data/tech provider, we give you a single source of truth - so you can capture sharper insights that drive better marketing decisions faster and more affordably than ever before. MADTECH.AI unifies and transforms MADTECH data and centralizes decision intelligence in a single, affordable platform. Leave data wrangling, data model building, proactive problem solving, and data visualization to MADTECH.AI.

Job Description
As a BI Analyst specializing in any of the BI tools, your role focuses on leveraging your expertise in BI tools to research, design, create, and implement data models, reports/dashboards, and data-driven applications. You will spearhead the design and implementation of end-to-end BI solutions, actively identifying and resolving data quality issues. As a collaborative team member, you will closely engage with other development teams within the organization to strategize, coordinate, and deliver solutions aligned with client priorities and internal product roadmaps.

Responsibilities
The Business Intelligence (BI) Analyst works directly with business stakeholders to create data-based solutions that analyze performance and maintain operations.
- Design data models and reports.
- Translate requests from business stakeholders into actionable reports.
- Meet with business stakeholders to clarify requirements and communicate progress.
- Collaborate with team members.
- Write and troubleshoot SQL queries.
- Prep data for reports and analysis.
- Peer review work of other team members.
- Plainly communicate technical issues and concepts to business stakeholders.
- Support automated report distributions.
- Promote reports from development to test to production.
- Flexible to work on any BI platform.

Required Skills
- Graduate with 2-4 years of experience; 2+ years of relevant experience is mandatory.
- Extensive experience with Power BI & Tableau, and SQL / relational databases.
- Extensive experience in dimensional data modeling: star schemas, snowflake schemas, denormalized models, and handling slowly changing dimensions/attributes.
- Strong understanding of disciplined approaches to data visualization and reporting.
- Experience in understanding complex ETL processes involving relational and non-relational data.
- Proven record of experience working with Apache Superset, Power BI, Tableau, or Looker.
- Connect and harmonize (both structured & unstructured) data across third-party data platforms.
- Draw insights from data and action them through alerts & customizable publishing tools.
- Certification in Power BI or any BI tool is preferred.
- Ability to lead cross-functional team communication and develop cross-functional partnerships.
- Excellent problem-solving and data analysis skills.
- Familiarity with our data technology stack (SQL Server, Amazon Redshift, S3, Athena, and Snowflake).
- Experience in the Media/Marketing industry or domain is a plus.
- Ability to quickly grasp existing systems, goals, and technology options for given situations.
- Experienced in the complete lifecycle of at least one data warehouse / Business Intelligence program at enterprise scale.
- Experience in Agile delivery and Agile tools like JIRA/Azure DevOps is a plus.

Preferred Skills
- Structured thinker, result-oriented, passionate about data-driven decision-making, particularly leveraging Apache Superset/Tableau/Power BI.
- Passion for problem-solving and developing reports & dashboards.
- Should possess good communication skills.
- Should be a very good team player with a go-getter attitude; results-driven, adaptable, inspirational, organized, and quality-focused.
- Ability to handle complex problems from design to execution and deliver in a time-bound manner under constraints.

Minimum Education Required
Bachelor's degree in Computer Science or a related quantitative field required (master's degree in business administration preferred).

Posted 1 day ago

Apply

12.0 - 20.0 years

0 Lacs

Mysuru, Karnataka, India

On-site


About iSOCRATES
Since 2015, iSOCRATES advises on, builds, and manages mission-critical Marketing, Advertising, and Data technologies, platforms, and processes as the Global Leader in MADTECH Resource Planning and Execution(TM). iSOCRATES delivers globally proven, reliable, and affordable Strategy and Operations Consulting and Managed Services for marketers, agencies, publishers, and the data/tech providers that enable them. iSOCRATES is staffed 24/7/365 with its proven specialists who save partners money and time and achieve transparent, accountable performance while delivering extraordinary value. Savings stem from a low-cost, focused global delivery model at scale that benefits from continuous re-investment in technology and specialized training.

About MADTECH.AI
MADTECH.AI is the Unified Marketing, Advertising, and Data Decision Intelligence Platform purpose-built to deliver speed to value for marketers. At MADTECH.AI, we make real-time AI-driven insights accessible to everyone. Whether you're a global or emerging brand, agency, publisher, or data/tech provider, we give you a single source of truth - so you can capture sharper insights that drive better marketing decisions faster and more affordably than ever before. MADTECH.AI unifies and transforms MADTECH data and centralizes decision intelligence in a single, affordable platform. Leave data wrangling, data model building, proactive problem solving, and data visualization to MADTECH.AI. The companies are headquartered in St. Petersburg, FL, U.S.A., with their global delivery centers in Mysuru and Bengaluru, Karnataka, India.

Job Description
The Group Product Manager will lead the strategic development and enhancement of our proprietary business intelligence platform, iSOCRATES MADTechAI, as well as other innovative products. This role demands a deep understanding of technology, strong analytical skills, and a collaborative mindset to evaluate product potential, oversee the product lifecycle, and ensure alignment with both client-partner and internal needs.

Key Responsibilities

Product Management and Strategy
- Lead the strategic vision and execution of iSOCRATES MADTechAI, focusing on feature enhancements and user experience improvements.
- Conduct market research to identify customer needs within the AdTech, MarTech, and DataTech landscapes, translating them into actionable product requirements.
- Prioritize product features based on business impact, customer feedback, and technical feasibility.

Product Development Lifecycle
- Oversee the entire product development lifecycle, including conception, design, development, testing, and launch phases.
- Utilize Agile methodologies (SCRUM, Kanban) to facilitate iterative development and continuous improvement.
- Manage roadmaps, timelines, and deliverables using tools like Jira, ensuring projects are on track and risks are mitigated.

Technical Design and Architecture
- SaaS Development: Deep understanding of SaaS architecture, deployment, and lifecycle management.
- Cloud Platforms: Proficiency with cloud platforms (AWS required; Google Cloud and Azure preferred).
- AI and Machine Learning: Extensive experience with AI/ML concepts, tools, and frameworks (e.g., TensorFlow, PyTorch, Scikit-learn) and their application in product development.
- Data Engineering: Strong knowledge of data engineering principles, including ETL processes, data pipelines, and data modeling to ensure data integrity and availability for analytics.
- Data Analytics: Strong knowledge of data analytics, data warehousing, and business intelligence tools (e.g., SQL, Tableau, Power BI, Sisense).
- Natural Language Processing (NLP): Familiarity with NLP techniques and applications in product features to enhance user engagement and insights.
- Microservices Architecture: Experience designing and implementing microservices architectures to enhance product scalability and maintainability.
- ReactJS Technologies: Proficiency in ReactJS and related frameworks to ensure seamless front-end development and integration with back-end services.
- Collaborate with engineering teams to define system architecture and design concepts that align with best practices in UX/UI.
- Ensure the integration of various technologies, including APIs, AngularJS, Node.js, ReactJS, and MVC architecture, into product offerings.
- Strong hands-on experience in Product-Led Growth (PLG) strategies and Partner/Channel go-to-market approaches.

Cross-Functional Collaboration
- Partner closely with the U.S. and India-based Partner Success teams to support pre-sales activities and customer engagement, acting as a subject matter expert in AdTech, MarTech, and DataTech.
- Facilitate communication between product, engineering, marketing, and sales teams to ensure cohesive product strategy and execution.
- Engage with external customers to gather feedback and drive product iterations.

Data Analysis and Insights
- Design and implement client data analysis methodologies, focusing on data-driven decision-making processes relevant to AdTech, MarTech, and DataTech.
- Develop analytics frameworks that leverage data science principles and advanced statistical methods to derive actionable insights for clients.
- Monitor product performance metrics and develop KPIs to assess impact and identify areas for improvement, leveraging A/B testing and experimentation techniques.

Process Development and Improvement
- Establish and refine processes for product management, ensuring repeatability and scalability.
- Lead initiatives to enhance existing workflows, focusing on efficiency and effectiveness in product delivery.
- Create and present progress reports, updates, and presentations to senior management and stakeholders.

Qualifications
- Bachelor's or Master's degree in Computer Science, Data Science, or a related quantitative field; MBA or specialized training in product management or data science is preferred.
- 12 to 20 years of experience in technology product engineering and development, with a minimum of 10 years in product management.
- Proven track record in managing complex products, especially in business intelligence or marketing technology domains.
- Strong proficiency in BI platforms (e.g., Sisense, Tableau, Power BI, Looker, DOMO) and data visualization tools.
- Deep understanding of cloud platforms (AWS, Snowflake) and experience with database query languages (SQL, NoSQL).
- Expertise in API development and management, along with knowledge of front-end technologies (AngularJS, ReactJS, Bootstrap).
- In-depth knowledge of AI and NLP technologies, with experience applying them to enhance product functionality.
- Strong background in data engineering, including ETL processes, data warehousing, and data pipeline management.
- Must have a strong understanding of digital advertising, including AdTech, MarTech, and DataTech technologies.
- Experience in B2C and B2B SaaS product development, particularly in customer journey mapping and email marketing.
- Strong analytical and problem-solving abilities, with a focus on data-driven outcomes.
- Excellent communication and presentation skills, capable of articulating complex ideas to diverse audiences.
- Collaborative and open-minded, fostering a culture of innovation and accountability.
- High energy and enthusiasm for driving product success in a fast-paced environment.
- Extensive experience with Atlassian products, including JIRA and Confluence.
- Extensive experience with product management and monitoring software.
- Must be ready to relocate to Mysuru or Bengaluru.

Posted 1 day ago

Apply

5.0 years

0 Lacs

Gurgaon, Haryana, India

On-site


Job Description: Data Engineer

Role Overview
The Data Engineer will be responsible for ensuring the availability, quality, and transformation of claims and operational data required for model development and integration. The role demands strong data pipeline design and engineering capabilities to support a scalable forecasting and capacity planning framework.

Key Responsibilities
- Gather and process data from multiple sources, including claims systems and operational databases.
- Build and maintain data pipelines to support segmentation and forecasting models (see the illustrative sketch below).
- Ensure data integrity, transformation, and enrichment to align with modeling requirements.
- Collaborate with the Data Scientist to provide model-ready datasets.
- Support data versioning, storage, and automation for periodic refreshes.
- Assist in deployment/integration of data flows into operational dashboards or planning tools.

Skills & Experience
- 5+ years of experience in data engineering or ETL development.
- Proficiency in SQL, Python, and data pipeline tools (e.g., Airflow, dbt, Spark).
- Experience with cloud-based data platforms (e.g., Azure, AWS, GCP).
- Understanding of data architecture and governance best practices.
- Prior experience working with insurance or operations-related data is a plus.
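For illustration alongside this listing (not part of the original posting): a minimal Apache Airflow sketch of the kind of scheduled extract-transform-load pipeline listed above. The DAG id, schedule, and task bodies are hypothetical placeholders; Airflow 2.4+ is assumed for the schedule argument.

```python
# Illustrative sketch only; DAG id, schedule, and task bodies are hypothetical placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_claims(**context):
    # Placeholder: pull the previous day's claims extract from the source system.
    print("extracting claims for", context["ds"])

def transform_claims(**context):
    # Placeholder: clean, enrich, and reshape the extract into a model-ready dataset.
    print("transforming claims for", context["ds"])

def load_claims(**context):
    # Placeholder: load the transformed dataset into the warehouse table.
    print("loading claims for", context["ds"])

with DAG(
    dag_id="claims_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="0 5 * * *",          # run every day at 05:00 (Airflow 2.4+ `schedule` argument)
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
) as dag:
    extract = PythonOperator(task_id="extract_claims", python_callable=extract_claims)
    transform = PythonOperator(task_id="transform_claims", python_callable=transform_claims)
    load = PythonOperator(task_id="load_claims", python_callable=load_claims)

    extract >> transform >> load
```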

Posted 1 day ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Purpose
Seeking a Senior SQL Developer to join our data team in analyzing, developing SSIS projects and custom reports, and working closely with the team on any SQL issues. This is an excellent opportunity for an ambitious and agile person looking to grow and learn in a very fast-paced environment.

Duties and Responsibilities
- Develop SSIS projects to handle ETL, transmission, encryption, and archiving of files received and generated.
- SQL database tuning and performance.
- Design, develop, and maintain database tables, stored procedures, and supporting objects.
- Build and support operational reports for the company and clients.
- Work with the data team to provide operational support and resolve recurring problems.
- Document database topology, architecture, processes, and procedures.
- Develop SQL queries and support ad hoc requests for data.
- Assist with capacity planning and resource expansion through data aggregation and analysis.
- Work with project managers to ensure that reports and metrics identify business needs and opportunities for process improvement.
- Identify inefficiencies in the database platform and provide solutions to management.
- Use problem-solving skills to assist in resolution of business problems.
- Develop analytical skills to resolve technical problems.
- Identify root causes of problems and propose solutions to prevent recurrence.

Qualifications
- Requires a four-year degree in Computer Science/Information Technology.
- Minimum five years working as a database engineer or in a related role.
- Minimum of three years of SSIS experience.
- Minimum of two years of experience with C# and/or VB.NET.
- Thorough understanding of database structures, theories, principles, and practices.
- Ability to write and troubleshoot SQL code and design stored procedures, functions, tables, views, triggers, indexes, and constraints.
- Extensive knowledge of MS SQL Server 2012 or later.
- Extensive knowledge of SSRS/SSIS/T-SQL.
- Technical knowledge of MS SQL Server internals with emphasis on query performance.
- Knowledge and know-how to troubleshoot potential issues; experience with best practices around database operations.
- Ability to work independently with minimal supervision.
- Ability to multi-task with several complex and demanding projects.

Working Conditions
- Physical Demands: While performing the duties of this job, the employee is occasionally required to move around the work area; sit; perform manual tasks; operate tools and other office equipment such as a computer, computer peripherals, and telephones; extend arms; kneel; talk and hear.
- Mental Demands: The employee must be able to follow directions, collaborate with others, and handle stress.
- Work Environment: The noise level in the work environment is usually minimal.

Med-Metrix will not discriminate against any employee or applicant for employment because of race, color, religion, sex (including pregnancy, gender identity, and sexual orientation), parental status, national origin, age, disability, genetic information (including family medical history), political affiliation, military service, veteran status, other non-merit based factors, or any other characteristic protected by federal, state or local law.

Posted 1 day ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Key Responsibilities

Set up and maintain monitoring dashboards for ETL jobs using Datadog, including metrics, logs, and alerts.
Monitor daily ETL workflows and proactively detect and resolve data pipeline failures or performance issues.
Create Datadog Monitors for job status (success/failure), job duration, resource utilization, and error trends.
Work closely with Data Engineering teams to onboard new pipelines and ensure observability best practices.
Integrate Datadog with tools.
Conduct root cause analysis of ETL failures and performance bottlenecks.
Tune thresholds, baselines, and anomaly detection settings in Datadog to reduce false positives.
Document incident handling procedures and contribute to improving overall ETL monitoring maturity.
Participate in on-call rotations or scheduled support windows to manage ETL health.

Required Skills & Qualifications

3+ years of experience in ETL/data pipeline monitoring, preferably in a cloud or hybrid environment.
Proficiency in using Datadog for metrics, logging, alerting, and dashboards.
Strong understanding of ETL concepts and tools (e.g., Airflow, Informatica, Talend, AWS Glue, or dbt).
Familiarity with SQL and querying large datasets.
Experience working with Python, Shell scripting, or Bash for automation and log parsing.
Understanding of cloud platforms (AWS/GCP/Azure) and services like S3, Redshift, BigQuery, etc.
Knowledge of CI/CD and DevOps principles related to data infrastructure monitoring.

Preferred Qualifications

Experience with distributed tracing and APM in Datadog.
Prior experience monitoring Spark, Kafka, or streaming pipelines.
Familiarity with ticketing tools (e.g., Jira, ServiceNow) and incident management workflows.
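As a rough illustration of the job-status instrumentation described above, the sketch below pushes custom ETL metrics to Datadog from Python using the DogStatsD client. The metric names and tags (etl.job.success, pipeline:orders_daily) are hypothetical choices for the example; the actual monitors and alert thresholds would still be configured in Datadog itself.

    import time
    from datadog import initialize, statsd

    # Assumes a local Datadog Agent with DogStatsD enabled (default port 8125)
    initialize(statsd_host="127.0.0.1", statsd_port=8125)

    def run_with_monitoring(job_name: str, job_fn) -> None:
        """Run an ETL job and emit status/duration metrics that Datadog monitors can alert on."""
        tags = [f"pipeline:{job_name}"]
        start = time.time()
        try:
            job_fn()
            statsd.increment("etl.job.success", tags=tags)
        except Exception:
            statsd.increment("etl.job.failure", tags=tags)
            raise
        finally:
            statsd.gauge("etl.job.duration_seconds", time.time() - start, tags=tags)

A Datadog monitor could then alert when etl.job.failure is reported or when duration exceeds a baseline, mirroring the job-status and job-duration monitors the posting mentions.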

Posted 1 day ago

Apply

7.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Linkedin logo

At Cotality, we are driven by a single mission—to make the property industry faster, smarter, and more people-centric. Cotality is the trusted source for property intelligence, with unmatched precision, depth, breadth, and insights across the entire ecosystem. Our talented team of 5,000 employees globally uses our network, scale, connectivity and technology to drive the largest asset class in the world. Join us as we work toward our vision of fueling a thriving global property ecosystem and a more resilient society.

Cotality is committed to cultivating a diverse and inclusive work culture that inspires innovation and bold thinking; it's a place where you can collaborate, feel valued, develop skills and directly impact the real estate economy. We know our people are our greatest asset. At Cotality, you can be yourself, lift people up and make an impact. By putting clients first and continuously innovating, we're working together to set the pace for unlocking new possibilities that better serve the property industry.

Job Description

In India, we operate as Next Gear India Private Limited, a fully-owned subsidiary of Cotality with offices in Kolkata, West Bengal, and Noida, Uttar Pradesh. Next Gear India Private Limited plays a vital role in Cotality's Product Development capabilities, focusing on creating and delivering innovative solutions for the Property & Casualty (P&C) Insurance and Property Restoration industries. While Next Gear India Private Limited operates under its own registered name in India, we are seamlessly integrated into the Cotality family, sharing the same commitment to innovation, quality, and client success. When you join Next Gear India Private Limited, you become part of the global Cotality team. Together, we shape the future of property insights and analytics, contributing to a smarter and more resilient property ecosystem through cutting-edge technology and insights.

Company Description

At CoreLogic, we are driven by a single mission—to make the property industry faster, smarter, and more people-centric. CoreLogic is the trusted source for property intelligence, with unmatched precision, depth, breadth, and insights across the entire ecosystem. Our talented team of 5,000 employees globally uses our network, scale, connectivity, and technology to drive the largest asset class in the world. Join us as we work toward our vision of fueling a thriving global property ecosystem and a more resilient society. CoreLogic is committed to cultivating a diverse and inclusive work culture that inspires innovation and bold thinking; it's a place where you can collaborate, feel valued, develop skills, and directly impact the insurance marketplace. We know our people are our greatest asset. At CoreLogic, you can be yourself, lift people up and make an impact. By putting clients first and continuously innovating, we're working together to set the pace for unlocking new possibilities that better serve the property insurance and restoration industry.

Description

We are seeking a highly skilled Lead Data Analyst to join our Analytics team to serve customers across the property insurance and restoration industries. As a Lead Data Analyst you will play a crucial role in developing methods and models to inform data-driven decision processes resulting in improved business performance for both internal and external stakeholder groups. You will be responsible for interpreting complex data sets and providing valuable insights to enhance the value of data assets.

The successful candidate will have a strong understanding of data mining techniques, methods of statistical analysis, and data visualization tools. This position offers an exciting opportunity to work in a dynamic environment, collaborating with cross-functional teams to support decision processes that will guide the respective industries into the future.

Responsibilities

Collaborate with cross-functional teams to understand and document requirements for analytics products.
Serve as the primary point of contact for new data/analytics requests and support for customers.
Lead a team of analysts to deliver client deliverables in a timely manner.
Act as the domain expert and voice of the customer to internal stakeholders during the analytics development process.
Develop and maintain an inventory of data, reporting, and analytic product deliverables for assigned customers.
Work with customer success teams to establish and maintain appropriate customer expectations for analytics deliverables.
Create and manage tickets on behalf of customers within internal frameworks.
Ensure timely delivery of assets to customers and aid in the development of internal processes for the delivery of analytics deliverables.
Work with IT/Infrastructure teams to provide customer access to assets and support internal audit processes to ensure data security.
Create and optimize complex SQL queries for data extraction, transformation, and aggregation.
Develop and maintain data models, dashboards, and reports to visualize data and track key performance metrics.
Conduct validation checks and implement error handling mechanisms to ensure data reliability.
Collaborate closely with stakeholders to align project goals with business needs and perform ad-hoc analysis to provide actionable recommendations.
Analyze large and complex datasets to identify trends, patterns, and insights, and present findings and recommendations to stakeholders in a clear and concise manner.

Job Qualifications

7+ years' property insurance experience preferred.
5+ years' experience in management of mid-level professional teams or a similar leadership position with a focus on data and/or performance management.
Extensive experience in applying and/or developing performance management metrics for claims organizations.
Bachelor's degree in computer science, data science, statistics, or a related field is preferred.
Mastery-level knowledge of data analysis tools such as Excel, Tableau or Power BI.
Demonstrated expertise in Power BI creating reports and dashboards, including the ability to connect to various data sources, prepare and model data, and create visualizations.
Expert knowledge of DAX for creating calculated columns and measures to meet report-specific requirements.
Expert knowledge of Power Query for importing, transforming, and shaping data.
Proficiency in SQL with the ability to write complex queries and optimize performance.
Strong knowledge of ETL processes, data pipelines and automation a plus.
Proficiency in managing tasks with Jira is an advantage.
Strong analytical and problem-solving skills.
Excellent attention to detail and the ability to work with large datasets.
Effective communication skills, both written and verbal.
Excellent visual communication and storytelling-with-data skills.
Ability to work independently and collaborate in a team environment.

Cotality's Diversity Commitment

Cotality is fully committed to employing a diverse workforce and creating an inclusive work environment that embraces everyone's unique contributions, experiences and values. We offer an empowered work environment that encourages creativity, initiative and professional growth and provides a competitive salary and benefits package. We are better together when we support and recognize our differences.

Equal Opportunity Employer Statement

Cotality is an Equal Opportunity employer committed to attracting and retaining the best-qualified people available, without regard to race, ancestry, place of origin, colour, ethnic origin, citizenship, creed, sex, sexual orientation, record of offences, age, marital status, family status or disability. Cotality maintains a Drug-Free Workplace. Please apply on our website for consideration.

Privacy Policy

Global Applicant Privacy Policy

By providing your telephone number, you agree to receive automated (SMS) text messages at that number from Cotality regarding all matters related to your application and, if you are hired, your employment and company business. Message & data rates may apply. You can opt out at any time by responding STOP or UNSUBSCRIBING and will automatically be opted out company-wide.

Posted 1 day ago

Apply

8.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Linkedin logo

Job Description

Support the day-to-day operations of these GCP-based data pipelines, ensuring data governance, reliability, and performance optimization. Hands-on experience with GCP data services such as Dataflow, BigQuery, Dataproc, Pub/Sub, and real-time streaming architectures is preferred. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. This role requires a flexible working schedule, including potential weekend support for critical operations, while maintaining a 40-hour work week.

The Data Engineer will support our software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company's data architecture to support our next generation of products and data initiatives.

A key aspect of the MDLZ Google Cloud BigQuery platform is handling the complexity of inbound data, which often does not follow a global design (e.g., variations in channel inventory, customer PoS, hierarchies, distribution, and promo plans). You will assist in ensuring the robust operation of pipelines that translate this varied inbound data into the standardized o9 global design. This also includes managing pipelines for different data drivers, ensuring consistent input to o9.

8+ years of overall industry experience and a minimum of 8-10 years of experience building and deploying large-scale data processing pipelines in a production environment.
Focus on excellence: Has practical experience of data-driven approaches, is familiar with the application of data security strategy, and is familiar with well-known data engineering tools and platforms.
Technical depth and breadth: Able to build and operate data pipelines, build and operate data storage, and has worked on big data architecture within distributed systems. Is familiar with infrastructure definition and automation in this context. Is aware of adjacent technologies to the ones they have worked on and can speak to the alternative tech choices to those made on their projects.
Implementation and automation of internal data extraction from SAP BW / HANA.
Implementation and automation of external data extraction from openly available internet data sources via APIs.
Data cleaning, curation and enrichment by using Alteryx, SQL, Python, R, PySpark, SparkR.
Preparing consolidated DataMarts for use by Data Scientists and managing SQL Databases.
Exposing data via Alteryx and SQL Database for consumption in Tableau.
Data documentation maintenance/updates.
Collaboration and workflow using a version control system (e.g., GitHub).
Learning ability: Is self-reflective, has a hunger to improve, has a keen interest in driving their own learning, and applies theoretical knowledge to practice.
Flexible Working Hours: This role requires the flexibility to work non-traditional hours, including providing support during off-hours or weekends for critical data pipeline job runs, deployments, or incident response, while ensuring the total work commitment remains a 40-hour week.
Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
Data engineering concepts: Experience in working with data lakes, data warehouses, and data marts, and has implemented ETL/ELT and SCD concepts.
ETL or data integration tools: Experience in Talend is highly desirable.
Analytics: Fluent with SQL and PL/SQL, and has used analytics tools like BigQuery for data analytics.
Cloud experience: Experienced in GCP services like Cloud Functions, Cloud Run, Dataflow, Dataproc and BigQuery.
Data sources: Experience of working with structured data sources like SAP, BW, flat files, RDBMS, etc. and semi-structured data sources like PDF, JSON, XML, etc.
Programming: Understanding of OOP concepts and hands-on experience with Python/Java for programming and scripting.
Data processing: Experience in working with any of the data processing platforms like Dataflow or Databricks.
Orchestration: Experience in orchestrating/scheduling data pipelines using tools like Airflow and Alteryx.
Keep our data separated and secure across national boundaries through multiple data centers and Azure regions.
Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
Work with data and analytics experts to strive for greater functionality in our data systems.

Skills And Experience

Rich experience working in the FMCG industry.
Deep knowledge in manipulating, processing, and extracting value from datasets.
5+ years of experience in data engineering, business intelligence, data science, or a related field.
Proficiency with programming languages: SQL, Python, R, Spark, PySpark, SparkR for data processing.
Strong project management skills and ability to plan and prioritize work in a fast-paced environment.
Experience with: MS Azure Data Factory, MS Azure Data Lake Store, SQL Database, SAP BW / ECC / HANA, Alteryx, Tableau.
Ability to think creatively; highly driven and self-motivated.
Knowledge of SAP BW for HANA (Extractors, Transformations, Modeling aDSOs, Queries, OpenHubs).

No relocation support available.

Business Unit Summary

Headquartered in Singapore, Mondelēz International's Asia, Middle East and Africa (AMEA) region is comprised of six business units, has more than 21,000 employees and operates in more than 27 countries including Australia, China, Indonesia, Ghana, India, Japan, Malaysia, New Zealand, Nigeria, Philippines, Saudi Arabia, South Africa, Thailand, United Arab Emirates and Vietnam. Seventy-six nationalities work across a network of more than 35 manufacturing plants, three global research and development technical centers and in offices stretching from Auckland, New Zealand to Casablanca, Morocco. Mondelēz International in the AMEA region is the proud maker of global and local iconic brands such as Oreo and belVita biscuits, Kinh Do mooncakes, Cadbury, Cadbury Dairy Milk and Milka chocolate, Halls candy, Stride gum, Tang powdered beverage and Philadelphia cheese. We are also proud to be named a Top Employer in many of our markets.

Mondelēz International is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, sexual orientation or preference, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law.

Job Type

Regular

Analytics & Modelling

Analytics & Data Science
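A minimal PySpark sketch of the kind of cleaning and curation step the posting lists (reading varied inbound files, standardizing them, and writing a curated layer for downstream marts). The bucket path, column names, and partitioning scheme are illustrative assumptions, not anything specified in the role.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("inbound_pos_curation").getOrCreate()

    # Read varied inbound point-of-sale extracts landed in cloud storage
    raw = spark.read.option("header", True).csv("gs://example-bucket/inbound/pos/*.csv")

    curated = (
        raw.dropDuplicates(["store_id", "sku", "sales_date"])
           .withColumn("sales_date", F.to_date("sales_date", "yyyy-MM-dd"))
           .withColumn("units_sold", F.col("units_sold").cast("int"))
           .filter(F.col("units_sold") >= 0)
    )

    # Write a standardized, partitioned layer for downstream marts and reporting
    curated.write.mode("overwrite").partitionBy("sales_date").parquet(
        "gs://example-bucket/curated/pos/"
    )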

Posted 1 day ago

Apply

6.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Linkedin logo

Job Description

Support the day-to-day operations of these GCP-based data pipelines, ensuring data governance, reliability, and performance optimization. Hands-on experience with GCP data services such as Dataflow, BigQuery, Dataproc, Pub/Sub, and real-time streaming architectures is preferred. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up.

The Data Engineer will support our software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company's data architecture to support our next generation of products and data initiatives. This role requires a flexible working schedule, including potential weekend support for critical operations, while maintaining a 40-hour work week.

A key aspect of the MDLZ DataHub Google BigQuery platform is handling the complexity of inbound data, which often does not follow a global design (e.g., variations in channel inventory, customer PoS, hierarchies, distribution, and promo plans). You will assist in ensuring the robust operation of pipelines that translate this varied inbound data into the standardized o9 global design. This also includes managing pipelines for different data drivers (> 6 months vs. 0-6 months), ensuring consistent input to o9.

6+ years of overall industry experience and a minimum of 6-8 years of experience building and deploying large-scale data processing pipelines in a production environment.
Focus on excellence: Has practical experience of data-driven approaches, is familiar with the application of data security strategy, and is familiar with well-known data engineering tools and platforms.
Technical depth and breadth: Able to build and operate data pipelines, build and operate data storage, and has worked on big data architecture within distributed systems. Is familiar with infrastructure definition and automation in this context. Is aware of adjacent technologies to the ones they have worked on and can speak to the alternative tech choices to those made on their projects.
Implementation and automation of internal data extraction from SAP BW / HANA.
Implementation and automation of external data extraction from openly available internet data sources via APIs.
Data cleaning, curation and enrichment by using Alteryx, SQL, Python, R, PySpark, SparkR.
Preparing consolidated DataMarts for use by Data Scientists and managing SQL Databases.
Exposing data via Alteryx and SQL Database for consumption in Tableau.
Data documentation maintenance/updates.
Collaboration and workflow using a version control system (e.g., GitHub).
Learning ability: Is self-reflective, has a hunger to improve, has a keen interest in driving their own learning, and applies theoretical knowledge to practice.
Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
Data engineering concepts: Experience in working with data lakes, data warehouses, and data marts, and has implemented ETL/ELT and SCD concepts.
ETL or data integration tools: Experience in Talend is highly desirable.
Analytics: Fluent with SQL and PL/SQL, and has used analytics tools like BigQuery for data analytics.
Cloud experience: Experienced in GCP services like Cloud Functions, Cloud Run, Dataflow, Dataproc and BigQuery.
Data sources: Experience of working with structured data sources like SAP, BW, flat files, RDBMS, etc. and semi-structured data sources like PDF, JSON, XML, etc.
Flexible Working Hours: This role requires the flexibility to work non-traditional hours, including providing support during off-hours or weekends for critical data pipeline job runs, deployments, or incident response, while ensuring the total work commitment remains a 40-hour week.
Data processing: Experience in working with any of the data processing platforms like Dataflow or Databricks.
Orchestration: Experience in orchestrating/scheduling data pipelines using tools like Airflow and Alteryx.
Keep our data separated and secure across national boundaries through multiple data centers and Azure regions.
Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
Work with data and analytics experts to strive for greater functionality in our data systems.

Skills And Experience

Deep knowledge in manipulating, processing, and extracting value from datasets.
At least 2 years of FMCG/CPG industry experience.
5+ years of experience in data engineering, business intelligence, data science, or a related field.
Proficiency with programming languages: SQL, Python, R, Spark, PySpark, SparkR for data processing.
Strong project management skills and ability to plan and prioritize work in a fast-paced environment.
Experience with: MS Azure Data Factory, MS Azure Data Lake Store, SQL Database, SAP BW / ECC / HANA, Alteryx, Tableau.
Ability to think creatively; highly driven and self-motivated.
Knowledge of SAP BW for HANA (Extractors, Transformations, Modeling aDSOs, Queries, OpenHubs).

No relocation support available.

Business Unit Summary

Headquartered in Singapore, Mondelēz International's Asia, Middle East and Africa (AMEA) region is comprised of six business units, has more than 21,000 employees and operates in more than 27 countries including Australia, China, Indonesia, Ghana, India, Japan, Malaysia, New Zealand, Nigeria, Philippines, Saudi Arabia, South Africa, Thailand, United Arab Emirates and Vietnam. Seventy-six nationalities work across a network of more than 35 manufacturing plants, three global research and development technical centers and in offices stretching from Auckland, New Zealand to Casablanca, Morocco. Mondelēz International in the AMEA region is the proud maker of global and local iconic brands such as Oreo and belVita biscuits, Kinh Do mooncakes, Cadbury, Cadbury Dairy Milk and Milka chocolate, Halls candy, Stride gum, Tang powdered beverage and Philadelphia cheese. We are also proud to be named a Top Employer in many of our markets.

Mondelēz International is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, sexual orientation or preference, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law.

Job Type

Regular

Analytics & Modelling

Analytics & Data Science
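Since this posting calls out ETL/ELT and SCD concepts on BigQuery, here is a hedged sketch of an incremental upsert (SCD Type 1 style) run from Python with the google-cloud-bigquery client. The project, dataset, table, and column names are made up for illustration only.

    from google.cloud import bigquery

    client = bigquery.Client(project="example-project")

    # Upsert the latest staged customer records into the dimension table (SCD Type 1 overwrite)
    merge_sql = """
    MERGE `example-project.sales_mart.dim_customer` AS target
    USING `example-project.staging.customer_delta` AS source
    ON target.customer_id = source.customer_id
    WHEN MATCHED THEN
      UPDATE SET target.customer_name = source.customer_name,
                 target.segment = source.segment,
                 target.updated_at = CURRENT_TIMESTAMP()
    WHEN NOT MATCHED THEN
      INSERT (customer_id, customer_name, segment, updated_at)
      VALUES (source.customer_id, source.customer_name, source.segment, CURRENT_TIMESTAMP())
    """

    client.query(merge_sql).result()  # waits for the MERGE job to finish

An SCD Type 2 variant would instead close the current row (set an end date/flag) and insert a new row per change, but the MERGE-driven pattern is the same.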

Posted 1 day ago

Apply

1.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

JOB_POSTING-3-71427

Job Description

Role Title: Analyst, Data Sourcing – Metadata (L08)

Company Overview

Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry's most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more. We have recently been ranked #2 among India's Best Companies to Work for by Great Place to Work. We were among the Top 50 India's Best Workplaces in Building a Culture of Innovation by All by GPTW and Top 25 among Best Workplaces in BFSI by GPTW. We have also been recognized by AmbitionBox Employee Choice Awards among the Top 20 Mid-Sized Companies, ranked #3 among Top Rated Companies for Women, and Top-Rated Financial Services Companies. Synchrony celebrates ~52% women diversity, 105+ people with disabilities, and ~50 veterans and veteran family members. We offer Flexibility and Choice for all employees and provide best-in-class employee benefits and programs that cater to work-life integration and overall well-being. We provide career advancement and upskilling opportunities, focusing on Advancing Diverse Talent to take up leadership roles.

Organizational Overview

Our Analytics organization comprises data analysts who focus on enabling strategies to enhance customer and partner experience and optimize business performance through data management and development of full-stack descriptive to prescriptive analytics solutions using cutting-edge technologies, thereby enabling business growth.

Role Summary/Purpose

The Analyst, Data Sourcing - Metadata (Individual Contributor) role is located in the India Analytics Hub (IAH) as part of Synchrony's enterprise Data Office. This role is responsible for supporting metadata management processes within Synchrony's Public and Private cloud and on-prem environments within the Chief Data Office. This role focuses on assisting with metadata harvesting, maintaining data dictionaries, and supporting the tracking of data lineage. The analyst will collaborate closely with senior team members to ensure access to accurate, well-governed metadata for analytics and reporting.

Key Responsibilities

Implement and maintain metadata management processes across Synchrony's Public and Private cloud and on-prem environments, ensuring accurate integration with technical and business metadata catalogs.
Work with the Data Architecture and Data Usage teams to track data lineage, traceability, and compliance, identifying and escalating metadata-related issues.
Document technical specifications, support solution design, and participate in agile development and release cycles for metadata initiatives.
Adhere to data management policies, track KPIs for metadata effectiveness and assist in the assessment of metadata risks to strengthen governance.
Maintain stable operations, troubleshoot metadata and lineage issues, and contribute to continuous process improvements to improve data accessibility.

Required Skills

Bachelor's degree, preferably in Engineering or Computer Science, with more than 1 year's hands-on data management experience, or in lieu of a degree, more than 3 years' experience.
Minimum of 1 year's experience in data management, focusing on metadata management, data governance, or data lineage, with exposure to cloud environments (AWS, Azure, or Google Cloud) and on-premise infrastructure.
Basic understanding of metadata management concepts, familiarity with data cataloging tools (e.g., AWS Glue Data Catalog, AbInitio, Collibra), basic proficiency in data lineage tracking tools (e.g., Apache Atlas, AbInitio, Collibra), and understanding of data integration technologies (e.g., ETL, APIs, data pipelines).
Good communication and collaboration skills, strong analytical thinking and problem-solving abilities, ability to work independently and manage multiple tasks, and attention to detail.

Desired Skills

AWS certifications such as AWS Cloud Practitioner, AWS Certified Data Analytics – Specialty.
Familiarity with hybrid cloud environments (combination of cloud and on-prem).
Skilled in Ab Initio Metadata Hub development and support, including importers, extractors, Metadata Hub database extensions, technical lineage, QueryIT, Ab Initio graph development, Ab Initio Control Center and Express IT.
Experience with harvesting technical lineage and producing lineage diagrams.
Familiarity with Unix, Linux, Stonebranch, and database platforms such as Oracle and Hive.
Basic knowledge of SQL and data query languages for managing and retrieving metadata.
Understanding of data governance frameworks (e.g., EDMC DCAM, GDPR compliance).
Familiarity with Collibra.

Eligibility Criteria

Bachelor's degree, preferably in Engineering or Computer Science, with more than 1 year's hands-on data management experience, or in lieu of a degree, more than 3 years' experience.

Work Timings: 2PM - 11PM IST

This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM Eastern Time – 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams. The remaining hours will be flexible for the employee to choose. Exceptions may apply periodically due to business needs. Please discuss this with the hiring manager for more details.

For Internal Applicants

Understand the criteria or mandatory skills required for the role before applying.
Inform your manager and HRM before applying for any role on Workday.
Ensure that your professional profile is updated (fields such as education, prior experience, other skills) and it is mandatory to upload your updated resume (Word or PDF format).
Must not be on any corrective action plan (First Formal/Final Formal, LPP).
L4 to L7 employees who have completed 12 months in the organization and 12 months in their current role and level are eligible.
L8 employees who have completed 18 months in the organization and 12 months in their current role and level are eligible.
L04+ employees can apply.

Grade/Level: 08

Job Family Group

Information Technology
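As a rough illustration of the metadata-harvesting work described above, the snippet below walks an AWS Glue Data Catalog with boto3 and collects a simple table-level data dictionary. The region, database name, and the fields captured are assumptions made for the example.

    import boto3

    glue = boto3.client("glue", region_name="us-east-1")

    def harvest_table_metadata(database_name: str) -> list[dict]:
        """Collect basic technical metadata (table, location, columns) from the Glue Data Catalog."""
        entries = []
        paginator = glue.get_paginator("get_tables")
        for page in paginator.paginate(DatabaseName=database_name):
            for table in page["TableList"]:
                entries.append({
                    "table": table["Name"],
                    "location": table.get("StorageDescriptor", {}).get("Location"),
                    "columns": [
                        {"name": c["Name"], "type": c.get("Type")}
                        for c in table.get("StorageDescriptor", {}).get("Columns", [])
                    ],
                })
        return entries

    # Example usage: feed the harvested entries into a data dictionary or lineage tool
    dictionary = harvest_table_metadata("analytics_raw")

The same pattern applies to other catalogs (Collibra, Ab Initio Metadata Hub) through their respective APIs; only the client and field mapping change.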

Posted 1 day ago

Apply

8.0 years

0 Lacs

Andhra Pradesh, India

On-site

Linkedin logo

Senior Developer with 8 to 10 years of experience and special emphasis on PySpark and Python, along with ETL tools (Talend / Ab Initio / Informatica / similar). Should also have good exposure to ETL tools in order to understand existing flows, rewrite them in Python and PySpark, and execute the test plans.

8-10 years of experience in designing and developing PySpark applications and ETL jobs using ETL tools.
5+ years of sound knowledge of PySpark to implement ETL logic.
Strong understanding of frontend technologies such as HTML, CSS, React and JavaScript.
Proficiency in data modeling and design, including PL/SQL development.
Creating test plans to understand current ETL flows and rewriting them in PySpark.
Providing ongoing support and maintenance for ETL applications, including troubleshooting and resolving issues.
Expertise in practices like Agile, peer reviews, and continuous integration.
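To give a flavour of rewriting a typical ETL-tool job in PySpark, here is a small, assumed example: a lookup join, a cast, a filter, and a row-count check standing in for a test-plan step. The file paths, column names, and validation rule are illustrative only, not taken from any specific legacy job.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders_etl_rewrite").getOrCreate()

    # Extract: the sources the legacy ETL job used to read
    orders = spark.read.option("header", True).csv("/data/raw/orders.csv")
    customers = spark.read.option("header", True).csv("/data/raw/customers.csv")

    # Transform: replicate the lookup and filter logic from the original job
    enriched = (
        orders.join(customers, on="customer_id", how="left")
              .withColumn("order_amount", F.col("order_amount").cast("double"))
              .filter(F.col("order_status") != "CANCELLED")
    )

    # Simple validation step, mirroring a test-plan check on the rewritten flow
    assert enriched.count() <= orders.count(), "join unexpectedly multiplied rows"

    # Load: write the target that downstream reports consume
    enriched.write.mode("overwrite").parquet("/data/curated/orders_enriched/")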

Posted 1 day ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Big Data, Oracle, PySpark.
Experience in SQL and understanding of ETL best practices.
Should have good hands-on experience in ETL/Big Data development.
Extensive hands-on experience in Scala.
Should have experience in Spark/YARN, troubleshooting Spark, Linux, and Python.
Setting up a Hadoop cluster; backup, recovery, and maintenance.

Posted 1 day ago

Apply

0 years

0 Lacs

Greater Chennai Area

On-site

Linkedin logo

Join us as Lead Data Engineer at Barclays, where you'll spearhead the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionize our digital offerings, ensuring unparalleled customer experiences.

To be successful as a Lead Data Engineer, you should have experience with:

Strong knowledge of ETL and dependent technologies in the below scope.
Python.
Extensive hands-on PySpark.
Strong SQL knowledge.
Strong understanding of data warehousing and data lakes.
Requirement gathering and analysis and other SDLC phases.
Data warehousing concepts.
AWS working exposure.
Big Data Hadoop.
Experience in relational databases like Oracle, SQL Server, and PL/SQL.
Understanding of Agile methodologies as well as SDLC life cycles and processes.
Expertise in UNIX scripts, DB & TWS.

You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking and digital and technology, as well as job-specific technical skills.

This role is based out of Pune.

Purpose of the role

Facilitates and supports Agile teams by ensuring they follow Scrum principles. To remove obstacles, enhance team collaboration, and ensure smooth communication, enabling the team to focus on delivering high-quality, iterative results. Facilitate Scrum events, promote continuous improvement, and act as a bridge between the team and external stakeholders.

Accountabilities

Facilitate Events: Facilitate events, as needed, and ensure that all events take place and are positive, productive, and kept within the timebox.
Support Iteration Execution: Ensure quality of ceremony artefacts and continuous customer value through iteration execution, maintain backlog refinement, and iterate on stakeholder feedback.
Optimize Flow: Identify and facilitate the removal of conflict impacting team flow, utilizing metrics to empower the team to communicate effectively, making all work visible.
Mitigate Risks: Identify and escalate risks to remove impediments and shield the Squad from interruptions.
Build High-Performing Teams: Foster and coach Agile Team attributes and continuous improvement, encourage stakeholder collaboration, deputise 'in the moment leadership', and drive high-performing team attributes.
Stakeholder Management: Facilitate stakeholder collaboration (e.g., business stakeholders, product teams, vendors) and build trust with stakeholders.
Governance and Reporting: Ensure data quality and provide representation at required governance forums, if applicable.

Assistant Vice President Expectations

To advise and influence decision making, contribute to policy development and take responsibility for operational effectiveness. Collaborate closely with other functions/business divisions. Lead a team performing complex tasks, using well-developed professional knowledge and skills to deliver on work that impacts the whole business function. Set objectives and coach employees in pursuit of those objectives, appraisal of performance relative to objectives and determination of reward outcomes. If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others.

OR, for an individual contributor, they will lead collaborative assignments and guide team members through structured assignments, identify the need for the inclusion of other areas of specialisation to complete assignments. They will identify new directions for assignments and/or projects, identifying a combination of cross-functional methodologies or practices to meet required outcomes. Consult on complex issues, providing advice to People Leaders to support the resolution of escalated issues. Identify ways to mitigate risk and develop new policies/procedures in support of the control and governance agenda. Take ownership for managing risk and strengthening controls in relation to the work done. Perform work that is closely related to that of other areas, which requires understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation sub-function. Collaborate with other areas of work, for business-aligned support areas, to keep up to speed with business activity and the business strategy. Engage in complex analysis of data from multiple sources of information, internal and external sources such as procedures and practises (in other areas, teams, companies, etc.) to solve problems creatively and effectively. Communicate complex information. 'Complex' information could include sensitive information or information that is difficult to communicate because of its content or its audience. Influence or convince stakeholders to achieve outcomes.

All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.

Posted 1 day ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Description

As part of the Last Mile Science & Technology organization, you'll partner closely with Product Managers, Data Scientists, and Software Engineers to drive improvements in Amazon's Last Mile delivery network. You will leverage data and analytics to generate insights that accelerate the scale, efficiency, and quality of the routes we build for our drivers through our end-to-end last mile planning systems. You will develop complex data engineering solutions using the AWS technology stack (S3, Glue, IAM, Redshift, Athena). You should have deep expertise and passion in working with large data sets, building complex data processes, performance tuning, bringing data from disparate data stores and programmatically identifying patterns. You will work with business owners to develop and define key business questions and requirements. You will provide guidance and support for other engineers with industry best practices and direction. Analytical ingenuity and leadership, business acumen, effective communication capabilities, and the ability to work effectively with cross-functional teams in a fast-paced environment are critical skills for this role.

Key job responsibilities

Design, implement, and support data warehouse / data lake infrastructure using the AWS big data stack, Python, Redshift, QuickSight, Glue/Lake Formation, EMR/Spark/Scala, Athena, etc.
Extract huge volumes of structured and unstructured data from various sources (relational / non-relational / NoSQL databases) and message streams and construct complex analyses.
Develop and manage ETLs to source data from various systems and create unified data models for analytics and reporting.
Perform detailed source-system analysis, source-to-target data analysis, and transformation analysis.
Participate in the full development cycle for ETL: design, implementation, validation, documentation, and maintenance.

Basic Qualifications

3+ years of data engineering experience.
Experience with data modeling, warehousing and building ETL pipelines.
Experience with one or more scripting languages (e.g., Python, KornShell).
3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.

Preferred Qualifications

Experience with big data technologies such as Hadoop, Hive, Spark, EMR.
Experience with big data processing technology (e.g., Hadoop or Apache Spark), data warehouse technical architecture, infrastructure components, ETL, and reporting/analytic tools and environments.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ

Job ID: A3009499
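For context on the AWS stack this role mentions, here is a minimal, hedged sketch of running an Athena query over data catalogued in Glue via boto3. The region, database, table, query, and S3 output location are placeholders for the example.

    import time
    import boto3

    athena = boto3.client("athena", region_name="us-east-1")

    # Ask Athena to aggregate route-level data stored in S3 and registered in the Glue catalog
    query = """
    SELECT delivery_station, COUNT(*) AS planned_routes
    FROM last_mile_db.route_plans
    WHERE plan_date = DATE '2024-01-15'
    GROUP BY delivery_station
    """

    execution = athena.start_query_execution(
        QueryString=query,
        QueryExecutionContext={"Database": "last_mile_db"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )
    query_id = execution["QueryExecutionId"]

    # Poll until the query finishes, then fetch the result rows
    while True:
        state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)

    if state == "SUCCEEDED":
        rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]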

Posted 1 day ago

Apply

Exploring ETL Jobs in India

The ETL (Extract, Transform, Load) job market in India is thriving with numerous opportunities for job seekers. ETL professionals play a crucial role in managing and analyzing data effectively for organizations across various industries. If you are considering a career in ETL, this article will provide you with valuable insights into the job market in India.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Pune
  4. Hyderabad
  5. Chennai

These cities are known for their thriving tech industries and often have a high demand for ETL professionals.

Average Salary Range

The average salary range for ETL professionals in India varies based on experience levels. Entry-level positions typically start at around ₹3-5 lakhs per annum, while experienced professionals can earn upwards of ₹10-15 lakhs per annum.

Career Path

In the ETL field, a typical career path may include roles such as:

  • Junior ETL Developer
  • ETL Developer
  • Senior ETL Developer
  • ETL Tech Lead
  • ETL Architect

As you gain experience and expertise, you can progress to higher-level roles within the ETL domain.

Related Skills

Alongside ETL, professionals in this field are often expected to have skills in:

  • SQL
  • Data Warehousing
  • Data Modeling
  • ETL Tools (e.g., Informatica, Talend)
  • Database Management Systems (e.g., Oracle, SQL Server)

Having a strong foundation in these related skills can enhance your capabilities as an ETL professional.

Interview Questions

Here are 25 interview questions that you may encounter in ETL job interviews:

  • What is ETL and why is it important? (basic)
  • Explain the difference between ETL and ELT processes. (medium)
  • How do you handle incremental loads in ETL processes? (medium; see the sketch after this list)
  • What is a surrogate key in the context of ETL? (basic)
  • Can you explain the concept of data profiling in ETL? (medium)
  • How do you handle data quality issues in ETL processes? (medium)
  • What are some common ETL tools you have worked with? (basic)
  • Explain the difference between a full load and an incremental load. (basic)
  • How do you optimize ETL processes for performance? (medium)
  • Can you describe a challenging ETL project you worked on and how you overcame obstacles? (advanced)
  • What is the significance of data cleansing in ETL? (basic)
  • How do you ensure data security and compliance in ETL processes? (medium)
  • Have you worked with real-time data integration in ETL? If so, how did you approach it? (advanced)
  • What are the key components of an ETL architecture? (basic)
  • How do you handle data transformation requirements in ETL processes? (medium)
  • What are some best practices for ETL development? (medium)
  • Can you explain the concept of change data capture in ETL? (medium)
  • How do you troubleshoot ETL job failures? (medium)
  • What role does metadata play in ETL processes? (basic)
  • How do you handle complex transformations in ETL processes? (medium)
  • What is the importance of data lineage in ETL? (basic)
  • Have you worked with parallel processing in ETL? If so, explain your experience. (advanced)
  • How do you ensure data consistency across different ETL jobs? (medium)
  • Can you explain the concept of slowly changing dimensions in ETL? (medium)
  • How do you document ETL processes for knowledge sharing and future reference? (basic)
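
One of the recurring topics above, incremental loading, is easier to discuss with a concrete picture in mind. Below is a small, generic Python sketch of a watermark-based incremental load; the table names, watermark column, and pyodbc connection string are assumptions for illustration, not a prescribed answer.

    import pyodbc

    # Hypothetical connection to a warehouse staging database
    conn = pyodbc.connect("DSN=warehouse;UID=etl_user;PWD=***")
    cursor = conn.cursor()

    # 1. Read the high-water mark left by the previous run
    cursor.execute("SELECT last_loaded_at FROM etl_control WHERE job_name = ?", "orders_load")
    last_loaded_at = cursor.fetchone()[0]

    # 2. Pull only rows changed since that watermark (the incremental delta)
    cursor.execute(
        "SELECT order_id, customer_id, amount, updated_at "
        "FROM source_orders WHERE updated_at > ?",
        last_loaded_at,
    )
    delta_rows = cursor.fetchall()

    # 3. Load the delta (append-only here; a MERGE/upsert would also handle updated rows),
    #    then advance the watermark in the same transaction
    cursor.executemany(
        "INSERT INTO target_orders (order_id, customer_id, amount, updated_at) VALUES (?, ?, ?, ?)",
        [tuple(r) for r in delta_rows],
    )
    cursor.execute(
        "UPDATE etl_control SET last_loaded_at = ? WHERE job_name = ?",
        max((r.updated_at for r in delta_rows), default=last_loaded_at),
        "orders_load",
    )
    conn.commit()

In an interview, the key points to draw out of a sketch like this are the watermark (or change-data-capture feed) that defines the delta, idempotent reloads, and updating the control record atomically with the load.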

Closing Remarks

As you explore ETL jobs in India, remember to showcase your skills and expertise confidently during interviews. With the right preparation and a solid understanding of ETL concepts, you can embark on a rewarding career in this dynamic field. Good luck with your job search!
